00:00:00.001 Started by upstream project "autotest-per-patch" build number 132293 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.258 > git --version # 'git version 2.39.2' 00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.386 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.398 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.409 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.409 > git config core.sparsecheckout # timeout=10 00:00:06.420 > git read-tree -mu HEAD # timeout=10 00:00:06.436 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.451 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.451 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.551 [Pipeline] Start of Pipeline 00:00:06.563 [Pipeline] library 00:00:06.565 Loading library shm_lib@master 00:00:06.565 Library shm_lib@master is cached. Copying from home. 00:00:06.579 [Pipeline] node 00:00:06.591 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.593 [Pipeline] { 00:00:06.603 [Pipeline] catchError 00:00:06.605 [Pipeline] { 00:00:06.617 [Pipeline] wrap 00:00:06.625 [Pipeline] { 00:00:06.632 [Pipeline] stage 00:00:06.634 [Pipeline] { (Prologue) 00:00:06.815 [Pipeline] sh 00:00:07.101 + logger -p user.info -t JENKINS-CI 00:00:07.122 [Pipeline] echo 00:00:07.124 Node: WFP16 00:00:07.132 [Pipeline] sh 00:00:07.426 [Pipeline] setCustomBuildProperty 00:00:07.435 [Pipeline] echo 00:00:07.436 Cleanup processes 00:00:07.439 [Pipeline] sh 00:00:07.718 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.718 928395 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.731 [Pipeline] sh 00:00:08.017 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.017 ++ grep -v 'sudo pgrep' 00:00:08.017 ++ awk '{print $1}' 00:00:08.017 + sudo kill -9 00:00:08.017 + true 00:00:08.030 [Pipeline] cleanWs 00:00:08.038 [WS-CLEANUP] Deleting project workspace... 00:00:08.038 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.043 [WS-CLEANUP] done 00:00:08.047 [Pipeline] setCustomBuildProperty 00:00:08.058 [Pipeline] sh 00:00:08.342 + sudo git config --global --replace-all safe.directory '*' 00:00:08.442 [Pipeline] httpRequest 00:00:11.494 [Pipeline] echo 00:00:11.502 Sorcerer 10.211.164.101 is dead 00:00:11.517 [Pipeline] httpRequest 00:00:14.541 [Pipeline] echo 00:00:14.544 Sorcerer 10.211.164.101 is dead 00:00:14.551 [Pipeline] httpRequest 00:00:14.611 [Pipeline] echo 00:00:14.612 Sorcerer 10.211.164.96 is dead 00:00:14.620 [Pipeline] httpRequest 00:00:15.106 [Pipeline] echo 00:00:15.108 Sorcerer 10.211.164.20 is alive 00:00:15.121 [Pipeline] retry 00:00:15.123 [Pipeline] { 00:00:15.139 [Pipeline] httpRequest 00:00:15.144 HttpMethod: GET 00:00:15.144 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.145 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.146 Response Code: HTTP/1.1 200 OK 00:00:15.147 Success: Status code 200 is in the accepted range: 200,404 00:00:15.147 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.293 [Pipeline] } 00:00:15.311 [Pipeline] // retry 00:00:15.318 [Pipeline] sh 00:00:15.602 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.617 [Pipeline] httpRequest 00:00:15.925 [Pipeline] echo 00:00:15.927 Sorcerer 10.211.164.20 is alive 00:00:15.937 [Pipeline] retry 00:00:15.939 [Pipeline] { 00:00:15.953 [Pipeline] httpRequest 00:00:15.958 HttpMethod: GET 00:00:15.958 URL: http://10.211.164.20/packages/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:15.959 Sending request to url: http://10.211.164.20/packages/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:15.961 Response Code: HTTP/1.1 404 Not Found 00:00:15.961 Success: Status code 404 is in the accepted range: 200,404 00:00:15.961 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:15.966 [Pipeline] } 00:00:15.983 [Pipeline] // retry 00:00:15.990 [Pipeline] sh 00:00:16.275 + rm -f spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:16.289 [Pipeline] retry 00:00:16.291 [Pipeline] { 00:00:16.312 [Pipeline] checkout 00:00:16.319 The recommended git tool is: NONE 00:00:16.344 using credential 00000000-0000-0000-0000-000000000002 00:00:16.346 Wiping out workspace first. 
00:00:16.355 Cloning the remote Git repository 00:00:16.358 Honoring refspec on initial clone 00:00:16.365 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:16.366 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10 00:00:16.374 Using reference repository: /var/ci_repos/spdk_multi 00:00:16.374 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:16.374 > git --version # timeout=10 00:00:16.379 > git --version # 'git version 2.45.2' 00:00:16.379 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:16.384 Setting http proxy: proxy-dmz.intel.com:911 00:00:16.384 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/36/25436/2 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:31.339 Avoid second fetch 00:00:31.357 Checking out Revision 4b2d483c63162e17641f75a0719927be08118be9 (FETCH_HEAD) 00:00:31.622 Commit message: "dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT" 00:00:31.319 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:31.325 > git config --add remote.origin.fetch refs/changes/36/25436/2 # timeout=10 00:00:31.327 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:31.342 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:31.351 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:31.361 > git config core.sparsecheckout # timeout=10 00:00:31.363 > git checkout -f 4b2d483c63162e17641f75a0719927be08118be9 # timeout=10 00:00:31.625 > git rev-list --no-walk 4bd31eb0a55e96d2df53478378615a3c3fa2bf4f # timeout=10 00:00:31.652 > git remote # timeout=10 00:00:31.657 > git submodule init # timeout=10 00:00:31.713 > git submodule sync # timeout=10 00:00:31.777 > git config --get remote.origin.url # timeout=10 00:00:31.785 > git submodule init # timeout=10 00:00:31.853 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:31.858 > git config --get submodule.dpdk.url # timeout=10 00:00:31.863 > git remote # timeout=10 00:00:31.868 > git config --get remote.origin.url # timeout=10 00:00:31.873 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:31.878 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:31.882 > git remote # timeout=10 00:00:31.887 > git config --get remote.origin.url # timeout=10 00:00:31.892 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:31.897 > git config --get submodule.isa-l.url # timeout=10 00:00:31.902 > git remote # timeout=10 00:00:31.904 > git config --get remote.origin.url # timeout=10 00:00:31.909 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:31.914 > git config --get submodule.ocf.url # timeout=10 00:00:31.919 > git remote # timeout=10 00:00:31.924 > git config --get remote.origin.url # timeout=10 00:00:31.929 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:31.933 > git config --get submodule.libvfio-user.url # timeout=10 00:00:31.938 > git remote # timeout=10 00:00:31.943 > git config --get remote.origin.url # timeout=10 00:00:31.948 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:31.953 > git config --get submodule.xnvme.url # timeout=10 00:00:31.957 > git remote # timeout=10 00:00:31.962 > git config --get remote.origin.url # timeout=10 00:00:31.965 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:31.969 > 
git config --get submodule.isa-l-crypto.url # timeout=10 00:00:31.974 > git remote # timeout=10 00:00:31.979 > git config --get remote.origin.url # timeout=10 00:00:31.984 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:31.988 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.989 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.989 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.989 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.989 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.990 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.990 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:31.992 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.992 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:31.992 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.993 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:31.993 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.993 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:31.994 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.994 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:31.994 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.994 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.995 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:31.995 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:31.995 Setting http proxy: proxy-dmz.intel.com:911 00:00:31.999 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:41.238 [Pipeline] dir 00:00:41.239 Running in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:41.240 [Pipeline] { 00:00:41.255 [Pipeline] sh 00:00:41.541 ++ nproc 00:00:41.541 + threads=112 00:00:41.541 + git repack -a -d --threads=112 00:00:46.813 + git submodule foreach git repack -a -d --threads=112 00:00:47.072 Entering 'dpdk' 00:00:52.347 Entering 'intel-ipsec-mb' 00:00:52.347 Entering 'isa-l' 00:00:52.347 Entering 'isa-l-crypto' 00:00:52.347 Entering 'libvfio-user' 00:00:52.606 Entering 'ocf' 00:00:52.866 Entering 'xnvme' 00:00:53.434 + find .git -type f -name alternates -print -delete 00:00:53.434 .git/objects/info/alternates 00:00:53.434 .git/modules/isa-l/objects/info/alternates 00:00:53.434 .git/modules/xnvme/objects/info/alternates 00:00:53.434 .git/modules/dpdk/objects/info/alternates 00:00:53.434 .git/modules/intel-ipsec-mb/objects/info/alternates 00:00:53.434 .git/modules/isa-l-crypto/objects/info/alternates 00:00:53.434 .git/modules/ocf/objects/info/alternates 00:00:53.434 .git/modules/libvfio-user/objects/info/alternates 00:00:53.444 [Pipeline] } 00:00:53.462 [Pipeline] // dir 00:00:53.468 [Pipeline] } 00:00:53.485 [Pipeline] // retry 00:00:53.493 [Pipeline] sh 00:00:53.777 + hash pigz 00:00:53.777 + tar -cf spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz -I pigz spdk 00:00:54.365 [Pipeline] retry 00:00:54.367 [Pipeline] { 00:00:54.382 [Pipeline] httpRequest 00:00:54.390 HttpMethod: PUT 00:00:54.390 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 
00:00:54.393 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:57.138 Response Code: HTTP/1.1 200 OK 00:00:57.144 Success: Status code 200 is in the accepted range: 200 00:00:57.147 [Pipeline] } 00:00:57.165 [Pipeline] // retry 00:00:57.173 [Pipeline] echo 00:00:57.175 00:00:57.175 Locking 00:00:57.175 Waited 0s for lock 00:00:57.175 Everything Fine. Saved: /storage/packages/spdk_4b2d483c63162e17641f75a0719927be08118be9.tar.gz 00:00:57.175 00:00:57.178 [Pipeline] sh 00:00:57.460 + git -C spdk log --oneline -n5 00:00:57.460 4b2d483c6 dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:00:57.460 560a1dde3 bdev/malloc: Support accel sequence when DIF is enabled 00:00:57.460 30279d1cf bdev: Add spdk_bdev_io_has_no_metadata() for bdev modules 00:00:57.460 4bd31eb0a bdev/malloc: Extract internal of verify_pi() for code reuse 00:00:57.460 2093c51b3 bdev/malloc: malloc_done() uses switch-case for clean up 00:00:57.469 [Pipeline] } 00:00:57.482 [Pipeline] // stage 00:00:57.489 [Pipeline] stage 00:00:57.491 [Pipeline] { (Prepare) 00:00:57.512 [Pipeline] writeFile 00:00:57.525 [Pipeline] sh 00:00:57.801 + logger -p user.info -t JENKINS-CI 00:00:57.811 [Pipeline] sh 00:00:58.085 + logger -p user.info -t JENKINS-CI 00:00:58.097 [Pipeline] sh 00:00:58.378 + cat autorun-spdk.conf 00:00:58.378 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.378 SPDK_TEST_NVMF=1 00:00:58.378 SPDK_TEST_NVME_CLI=1 00:00:58.378 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.378 SPDK_TEST_NVMF_NICS=e810 00:00:58.378 SPDK_TEST_VFIOUSER=1 00:00:58.378 SPDK_RUN_UBSAN=1 00:00:58.378 NET_TYPE=phy 00:00:58.385 RUN_NIGHTLY=0 00:00:58.389 [Pipeline] readFile 00:00:58.414 [Pipeline] withEnv 00:00:58.416 [Pipeline] { 00:00:58.431 [Pipeline] sh 00:00:58.714 + set -ex 00:00:58.714 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:58.714 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:58.714 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.714 ++ SPDK_TEST_NVMF=1 00:00:58.714 ++ SPDK_TEST_NVME_CLI=1 00:00:58.714 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.714 ++ SPDK_TEST_NVMF_NICS=e810 00:00:58.714 ++ SPDK_TEST_VFIOUSER=1 00:00:58.714 ++ SPDK_RUN_UBSAN=1 00:00:58.714 ++ NET_TYPE=phy 00:00:58.714 ++ RUN_NIGHTLY=0 00:00:58.714 + case $SPDK_TEST_NVMF_NICS in 00:00:58.714 + DRIVERS=ice 00:00:58.714 + [[ tcp == \r\d\m\a ]] 00:00:58.714 + [[ -n ice ]] 00:00:58.714 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:58.714 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:58.714 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:58.714 rmmod: ERROR: Module irdma is not currently loaded 00:00:58.714 rmmod: ERROR: Module i40iw is not currently loaded 00:00:58.714 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:58.714 + true 00:00:58.714 + for D in $DRIVERS 00:00:58.714 + sudo modprobe ice 00:00:58.714 + exit 0 00:00:58.722 [Pipeline] } 00:00:58.736 [Pipeline] // withEnv 00:00:58.741 [Pipeline] } 00:00:58.754 [Pipeline] // stage 00:00:58.762 [Pipeline] catchError 00:00:58.764 [Pipeline] { 00:00:58.777 [Pipeline] timeout 00:00:58.777 Timeout set to expire in 1 hr 0 min 00:00:58.779 [Pipeline] { 00:00:58.791 [Pipeline] stage 00:00:58.793 [Pipeline] { (Tests) 00:00:58.805 [Pipeline] sh 00:00:59.086 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.086 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.086 + 
DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.086 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:59.086 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.086 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.086 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:59.086 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.086 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.086 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.086 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:59.086 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.086 + source /etc/os-release 00:00:59.086 ++ NAME='Fedora Linux' 00:00:59.086 ++ VERSION='39 (Cloud Edition)' 00:00:59.086 ++ ID=fedora 00:00:59.086 ++ VERSION_ID=39 00:00:59.086 ++ VERSION_CODENAME= 00:00:59.086 ++ PLATFORM_ID=platform:f39 00:00:59.086 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:59.086 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:59.086 ++ LOGO=fedora-logo-icon 00:00:59.086 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:59.086 ++ HOME_URL=https://fedoraproject.org/ 00:00:59.086 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:59.086 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:59.086 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:59.086 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:59.086 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:59.086 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:59.086 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:59.086 ++ SUPPORT_END=2024-11-12 00:00:59.086 ++ VARIANT='Cloud Edition' 00:00:59.086 ++ VARIANT_ID=cloud 00:00:59.086 + uname -a 00:00:59.086 Linux spdk-wfp-16 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:59.086 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:01.617 Hugepages 00:01:01.617 node hugesize free / total 00:01:01.617 node0 1048576kB 0 / 0 00:01:01.617 node0 2048kB 0 / 0 00:01:01.617 node1 1048576kB 0 / 0 00:01:01.617 node1 2048kB 0 / 0 00:01:01.617 00:01:01.617 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:01.617 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:01.617 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:01.617 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:01.617 + rm -f /tmp/spdk-ld-path 00:01:01.617 + source autorun-spdk.conf 00:01:01.617 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.617 ++ SPDK_TEST_NVMF=1 00:01:01.617 ++ SPDK_TEST_NVME_CLI=1 00:01:01.617 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.617 ++ SPDK_TEST_NVMF_NICS=e810 00:01:01.617 ++ SPDK_TEST_VFIOUSER=1 00:01:01.617 ++ 
SPDK_RUN_UBSAN=1 00:01:01.617 ++ NET_TYPE=phy 00:01:01.617 ++ RUN_NIGHTLY=0 00:01:01.617 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.617 + [[ -n '' ]] 00:01:01.617 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.617 + for M in /var/spdk/build-*-manifest.txt 00:01:01.617 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:01.617 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.617 + for M in /var/spdk/build-*-manifest.txt 00:01:01.617 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.617 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.617 + for M in /var/spdk/build-*-manifest.txt 00:01:01.617 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.617 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.617 ++ uname 00:01:01.617 + [[ Linux == \L\i\n\u\x ]] 00:01:01.617 + sudo dmesg -T 00:01:01.617 + sudo dmesg --clear 00:01:01.617 + dmesg_pid=931128 00:01:01.617 + [[ Fedora Linux == FreeBSD ]] 00:01:01.617 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.617 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.618 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.618 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:01.618 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:01.618 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.618 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.618 + sudo dmesg -Tw 00:01:01.618 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.618 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.618 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:01.618 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.618 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.618 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.618 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.618 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.618 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.618 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.618 11:19:02 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:01.618 11:19:02 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:01.618 11:19:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:01.618 11:19:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:01.618 11:19:02 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.618 11:19:02 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:01.618 11:19:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:01.618 11:19:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:01.618 11:19:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.618 11:19:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.618 11:19:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.618 11:19:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.618 11:19:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.618 11:19:02 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.618 11:19:02 -- paths/export.sh@5 -- $ export PATH 00:01:01.618 11:19:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.618 11:19:02 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:01.618 11:19:02 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:01.618 11:19:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731665942.XXXXXX 00:01:01.618 11:19:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731665942.wkIyqv 00:01:01.618 11:19:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:01.618 11:19:02 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:01.618 11:19:02 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:01.618 11:19:02 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.618 11:19:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.618 11:19:02 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:01.618 11:19:02 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:01.618 11:19:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.876 11:19:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:01.876 11:19:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:01.876 11:19:02 -- pm/common@17 -- $ local monitor 00:01:01.876 11:19:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.876 11:19:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.876 11:19:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.876 11:19:02 -- pm/common@21 -- $ date +%s 00:01:01.876 11:19:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.876 11:19:02 -- pm/common@21 -- $ date +%s 00:01:01.876 11:19:02 -- pm/common@25 -- $ sleep 1 00:01:01.876 11:19:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665942 00:01:01.876 
11:19:02 -- pm/common@21 -- $ date +%s 00:01:01.876 11:19:02 -- pm/common@21 -- $ date +%s 00:01:01.876 11:19:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665942 00:01:01.876 11:19:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665942 00:01:01.876 11:19:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665942 00:01:01.876 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665942_collect-cpu-load.pm.log 00:01:01.876 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665942_collect-vmstat.pm.log 00:01:01.876 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665942_collect-cpu-temp.pm.log 00:01:01.876 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665942_collect-bmc-pm.bmc.pm.log 00:01:02.813 11:19:03 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:02.813 11:19:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.813 11:19:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.813 11:19:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.813 11:19:03 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.813 Fri Nov 15 10:19:03 AM UTC 2024 00:01:02.813 11:19:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.813 v25.01-pre-210-g4b2d483c6 00:01:02.813 11:19:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:02.813 11:19:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.813 11:19:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.813 11:19:03 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:02.813 11:19:03 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:02.813 11:19:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.813 ************************************ 00:01:02.813 START TEST ubsan 00:01:02.813 ************************************ 00:01:02.813 11:19:03 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:02.813 using ubsan 00:01:02.813 00:01:02.813 real 0m0.000s 00:01:02.813 user 0m0.000s 00:01:02.813 sys 0m0.000s 00:01:02.813 11:19:03 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:02.813 11:19:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.813 ************************************ 00:01:02.813 END TEST ubsan 00:01:02.813 ************************************ 00:01:02.813 11:19:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:02.813 11:19:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.813 11:19:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.813 11:19:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:02.813 11:19:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.813 11:19:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:02.813 11:19:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:02.813 11:19:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:02.813 11:19:03 -- 
spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:03.078 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:03.078 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:03.341 Using 'verbs' RDMA provider 00:01:16.475 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:28.673 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:29.190 Creating mk/config.mk...done. 00:01:29.190 Creating mk/cc.flags.mk...done. 00:01:29.190 Type 'make' to build. 00:01:29.190 11:19:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:29.190 11:19:29 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:29.190 11:19:29 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:29.190 11:19:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.190 ************************************ 00:01:29.190 START TEST make 00:01:29.190 ************************************ 00:01:29.190 11:19:29 make -- common/autotest_common.sh@1127 -- $ make -j112 00:01:29.757 make[1]: Nothing to be done for 'all'. 00:01:30.016 help2man: can't get `--help' info from ./programs/igzip 00:01:30.016 Try `--no-discard-stderr' if option outputs to stderr 00:01:30.016 make[3]: [Makefile:4944: programs/igzip.1] Error 127 (ignored) 00:01:31.400 The Meson build system 00:01:31.400 Version: 1.5.0 00:01:31.400 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:31.400 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.400 Build type: native build 00:01:31.400 Project name: libvfio-user 00:01:31.400 Project version: 0.0.1 00:01:31.400 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:31.400 C linker for the host machine: cc ld.bfd 2.40-14 00:01:31.400 Host machine cpu family: x86_64 00:01:31.400 Host machine cpu: x86_64 00:01:31.400 Run-time dependency threads found: YES 00:01:31.400 Library dl found: YES 00:01:31.400 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:31.400 Run-time dependency json-c found: YES 0.17 00:01:31.400 Run-time dependency cmocka found: YES 1.1.7 00:01:31.400 Program pytest-3 found: NO 00:01:31.400 Program flake8 found: NO 00:01:31.400 Program misspell-fixer found: NO 00:01:31.400 Program restructuredtext-lint found: NO 00:01:31.400 Program valgrind found: YES (/usr/bin/valgrind) 00:01:31.400 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:31.400 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:31.400 Compiler for C supports arguments -Wwrite-strings: YES 00:01:31.400 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:31.400 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:31.400 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:31.400 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:31.400 Build targets in project: 8 00:01:31.400 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:31.400 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:31.400 00:01:31.400 libvfio-user 0.0.1 00:01:31.400 00:01:31.400 User defined options 00:01:31.400 buildtype : debug 00:01:31.400 default_library: shared 00:01:31.400 libdir : /usr/local/lib 00:01:31.400 00:01:31.400 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.658 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.658 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:31.658 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:31.658 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:31.658 [4/37] Compiling C object samples/null.p/null.c.o 00:01:31.914 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:31.914 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:31.914 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:31.914 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:31.914 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:31.914 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:31.914 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:31.914 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:31.914 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:31.914 [14/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:31.914 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:31.914 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:31.914 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:31.914 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:31.914 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:31.914 [20/37] Compiling C object samples/server.p/server.c.o 00:01:31.914 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:31.914 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:31.914 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:31.914 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:31.914 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:31.914 [26/37] Compiling C object samples/client.p/client.c.o 00:01:31.914 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:31.914 [28/37] Linking target samples/client 00:01:31.914 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:32.171 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:32.171 [31/37] Linking target test/unit_tests 00:01:32.171 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:32.171 [33/37] Linking target samples/gpio-pci-idio-16 
00:01:32.171 [34/37] Linking target samples/server 00:01:32.171 [35/37] Linking target samples/lspci 00:01:32.171 [36/37] Linking target samples/null 00:01:32.171 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:32.171 INFO: autodetecting backend as ninja 00:01:32.171 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.171 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.735 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.735 ninja: no work to do. 00:01:39.295 The Meson build system 00:01:39.295 Version: 1.5.0 00:01:39.295 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:39.295 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:39.295 Build type: native build 00:01:39.295 Program cat found: YES (/usr/bin/cat) 00:01:39.295 Project name: DPDK 00:01:39.295 Project version: 24.03.0 00:01:39.295 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:39.295 C linker for the host machine: cc ld.bfd 2.40-14 00:01:39.295 Host machine cpu family: x86_64 00:01:39.295 Host machine cpu: x86_64 00:01:39.295 Message: ## Building in Developer Mode ## 00:01:39.295 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.295 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.295 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.295 Program python3 found: YES (/usr/bin/python3) 00:01:39.295 Program cat found: YES (/usr/bin/cat) 00:01:39.295 Compiler for C supports arguments -march=native: YES 00:01:39.295 Checking for size of "void *" : 8 00:01:39.295 Checking for size of "void *" : 8 (cached) 00:01:39.295 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:39.295 Library m found: YES 00:01:39.295 Library numa found: YES 00:01:39.295 Has header "numaif.h" : YES 00:01:39.295 Library fdt found: NO 00:01:39.295 Library execinfo found: NO 00:01:39.295 Has header "execinfo.h" : YES 00:01:39.295 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:39.295 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.295 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.295 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.295 Run-time dependency openssl found: YES 3.1.1 00:01:39.295 Run-time dependency libpcap found: YES 1.10.4 00:01:39.295 Has header "pcap.h" with dependency libpcap: YES 00:01:39.295 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.295 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.295 Compiler for C supports arguments -Wformat: YES 00:01:39.295 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.295 Compiler for C supports arguments -Wformat-security: NO 00:01:39.295 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.295 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.295 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.295 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.295 Compiler for C supports arguments 
-Wpointer-arith: YES 00:01:39.295 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.295 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.295 Compiler for C supports arguments -Wundef: YES 00:01:39.295 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.295 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.295 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:39.295 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.295 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.295 Program objdump found: YES (/usr/bin/objdump) 00:01:39.295 Compiler for C supports arguments -mavx512f: YES 00:01:39.295 Checking if "AVX512 checking" compiles: YES 00:01:39.295 Fetching value of define "__SSE4_2__" : 1 00:01:39.295 Fetching value of define "__AES__" : 1 00:01:39.295 Fetching value of define "__AVX__" : 1 00:01:39.295 Fetching value of define "__AVX2__" : 1 00:01:39.295 Fetching value of define "__AVX512BW__" : 1 00:01:39.295 Fetching value of define "__AVX512CD__" : 1 00:01:39.295 Fetching value of define "__AVX512DQ__" : 1 00:01:39.295 Fetching value of define "__AVX512F__" : 1 00:01:39.295 Fetching value of define "__AVX512VL__" : 1 00:01:39.295 Fetching value of define "__PCLMUL__" : 1 00:01:39.295 Fetching value of define "__RDRND__" : 1 00:01:39.295 Fetching value of define "__RDSEED__" : 1 00:01:39.295 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:39.295 Fetching value of define "__znver1__" : (undefined) 00:01:39.295 Fetching value of define "__znver2__" : (undefined) 00:01:39.295 Fetching value of define "__znver3__" : (undefined) 00:01:39.295 Fetching value of define "__znver4__" : (undefined) 00:01:39.295 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.295 Message: lib/log: Defining dependency "log" 00:01:39.295 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.295 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.295 Checking for function "getentropy" : NO 00:01:39.295 Message: lib/eal: Defining dependency "eal" 00:01:39.295 Message: lib/ring: Defining dependency "ring" 00:01:39.295 Message: lib/rcu: Defining dependency "rcu" 00:01:39.295 Message: lib/mempool: Defining dependency "mempool" 00:01:39.295 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.295 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.295 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.295 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.295 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:39.295 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:39.295 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:39.295 Compiler for C supports arguments -mpclmul: YES 00:01:39.295 Compiler for C supports arguments -maes: YES 00:01:39.295 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.295 Compiler for C supports arguments -mavx512bw: YES 00:01:39.295 Compiler for C supports arguments -mavx512dq: YES 00:01:39.295 Compiler for C supports arguments -mavx512vl: YES 00:01:39.295 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.295 Compiler for C supports arguments -mavx2: YES 00:01:39.295 Compiler for C supports arguments -mavx: YES 00:01:39.295 Message: lib/net: Defining dependency "net" 00:01:39.295 Message: lib/meter: Defining dependency "meter" 00:01:39.295 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.295 
Message: lib/pci: Defining dependency "pci" 00:01:39.295 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.295 Message: lib/hash: Defining dependency "hash" 00:01:39.295 Message: lib/timer: Defining dependency "timer" 00:01:39.295 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.295 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.295 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.295 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:39.295 Message: lib/power: Defining dependency "power" 00:01:39.295 Message: lib/reorder: Defining dependency "reorder" 00:01:39.295 Message: lib/security: Defining dependency "security" 00:01:39.295 Has header "linux/userfaultfd.h" : YES 00:01:39.295 Has header "linux/vduse.h" : YES 00:01:39.295 Message: lib/vhost: Defining dependency "vhost" 00:01:39.295 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.295 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.295 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.295 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.295 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:39.295 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:39.296 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:39.296 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:39.296 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:39.296 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:39.296 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:39.296 Configuring doxy-api-html.conf using configuration 00:01:39.296 Configuring doxy-api-man.conf using configuration 00:01:39.296 Program mandb found: YES (/usr/bin/mandb) 00:01:39.296 Program sphinx-build found: NO 00:01:39.296 Configuring rte_build_config.h using configuration 00:01:39.296 Message: 00:01:39.296 ================= 00:01:39.296 Applications Enabled 00:01:39.296 ================= 00:01:39.296 00:01:39.296 apps: 00:01:39.296 00:01:39.296 00:01:39.296 Message: 00:01:39.296 ================= 00:01:39.296 Libraries Enabled 00:01:39.296 ================= 00:01:39.296 00:01:39.296 libs: 00:01:39.296 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.296 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:39.296 cryptodev, dmadev, power, reorder, security, vhost, 00:01:39.296 00:01:39.296 Message: 00:01:39.296 =============== 00:01:39.296 Drivers Enabled 00:01:39.296 =============== 00:01:39.296 00:01:39.296 common: 00:01:39.296 00:01:39.296 bus: 00:01:39.296 pci, vdev, 00:01:39.296 mempool: 00:01:39.296 ring, 00:01:39.296 dma: 00:01:39.296 00:01:39.296 net: 00:01:39.296 00:01:39.296 crypto: 00:01:39.296 00:01:39.296 compress: 00:01:39.296 00:01:39.296 vdpa: 00:01:39.296 00:01:39.296 00:01:39.296 Message: 00:01:39.296 ================= 00:01:39.296 Content Skipped 00:01:39.296 ================= 00:01:39.296 00:01:39.296 apps: 00:01:39.296 dumpcap: explicitly disabled via build config 00:01:39.296 graph: explicitly disabled via build config 00:01:39.296 pdump: explicitly disabled via build config 00:01:39.296 proc-info: explicitly disabled via build config 00:01:39.296 test-acl: explicitly disabled via build config 00:01:39.296 test-bbdev: explicitly disabled via build config 00:01:39.296 test-cmdline: explicitly disabled via build config 
00:01:39.296 test-compress-perf: explicitly disabled via build config 00:01:39.296 test-crypto-perf: explicitly disabled via build config 00:01:39.296 test-dma-perf: explicitly disabled via build config 00:01:39.296 test-eventdev: explicitly disabled via build config 00:01:39.296 test-fib: explicitly disabled via build config 00:01:39.296 test-flow-perf: explicitly disabled via build config 00:01:39.296 test-gpudev: explicitly disabled via build config 00:01:39.296 test-mldev: explicitly disabled via build config 00:01:39.296 test-pipeline: explicitly disabled via build config 00:01:39.296 test-pmd: explicitly disabled via build config 00:01:39.296 test-regex: explicitly disabled via build config 00:01:39.296 test-sad: explicitly disabled via build config 00:01:39.296 test-security-perf: explicitly disabled via build config 00:01:39.296 00:01:39.296 libs: 00:01:39.296 argparse: explicitly disabled via build config 00:01:39.296 metrics: explicitly disabled via build config 00:01:39.296 acl: explicitly disabled via build config 00:01:39.296 bbdev: explicitly disabled via build config 00:01:39.296 bitratestats: explicitly disabled via build config 00:01:39.296 bpf: explicitly disabled via build config 00:01:39.296 cfgfile: explicitly disabled via build config 00:01:39.296 distributor: explicitly disabled via build config 00:01:39.296 efd: explicitly disabled via build config 00:01:39.296 eventdev: explicitly disabled via build config 00:01:39.296 dispatcher: explicitly disabled via build config 00:01:39.296 gpudev: explicitly disabled via build config 00:01:39.296 gro: explicitly disabled via build config 00:01:39.296 gso: explicitly disabled via build config 00:01:39.296 ip_frag: explicitly disabled via build config 00:01:39.296 jobstats: explicitly disabled via build config 00:01:39.296 latencystats: explicitly disabled via build config 00:01:39.296 lpm: explicitly disabled via build config 00:01:39.296 member: explicitly disabled via build config 00:01:39.296 pcapng: explicitly disabled via build config 00:01:39.296 rawdev: explicitly disabled via build config 00:01:39.296 regexdev: explicitly disabled via build config 00:01:39.296 mldev: explicitly disabled via build config 00:01:39.296 rib: explicitly disabled via build config 00:01:39.296 sched: explicitly disabled via build config 00:01:39.296 stack: explicitly disabled via build config 00:01:39.296 ipsec: explicitly disabled via build config 00:01:39.296 pdcp: explicitly disabled via build config 00:01:39.296 fib: explicitly disabled via build config 00:01:39.296 port: explicitly disabled via build config 00:01:39.296 pdump: explicitly disabled via build config 00:01:39.296 table: explicitly disabled via build config 00:01:39.296 pipeline: explicitly disabled via build config 00:01:39.296 graph: explicitly disabled via build config 00:01:39.296 node: explicitly disabled via build config 00:01:39.296 00:01:39.296 drivers: 00:01:39.296 common/cpt: not in enabled drivers build config 00:01:39.296 common/dpaax: not in enabled drivers build config 00:01:39.296 common/iavf: not in enabled drivers build config 00:01:39.296 common/idpf: not in enabled drivers build config 00:01:39.296 common/ionic: not in enabled drivers build config 00:01:39.296 common/mvep: not in enabled drivers build config 00:01:39.296 common/octeontx: not in enabled drivers build config 00:01:39.296 bus/auxiliary: not in enabled drivers build config 00:01:39.296 bus/cdx: not in enabled drivers build config 00:01:39.296 bus/dpaa: not in enabled drivers build config 
00:01:39.296 bus/fslmc: not in enabled drivers build config 00:01:39.296 bus/ifpga: not in enabled drivers build config 00:01:39.296 bus/platform: not in enabled drivers build config 00:01:39.296 bus/uacce: not in enabled drivers build config 00:01:39.296 bus/vmbus: not in enabled drivers build config 00:01:39.296 common/cnxk: not in enabled drivers build config 00:01:39.296 common/mlx5: not in enabled drivers build config 00:01:39.296 common/nfp: not in enabled drivers build config 00:01:39.296 common/nitrox: not in enabled drivers build config 00:01:39.296 common/qat: not in enabled drivers build config 00:01:39.296 common/sfc_efx: not in enabled drivers build config 00:01:39.296 mempool/bucket: not in enabled drivers build config 00:01:39.296 mempool/cnxk: not in enabled drivers build config 00:01:39.296 mempool/dpaa: not in enabled drivers build config 00:01:39.296 mempool/dpaa2: not in enabled drivers build config 00:01:39.296 mempool/octeontx: not in enabled drivers build config 00:01:39.296 mempool/stack: not in enabled drivers build config 00:01:39.296 dma/cnxk: not in enabled drivers build config 00:01:39.296 dma/dpaa: not in enabled drivers build config 00:01:39.296 dma/dpaa2: not in enabled drivers build config 00:01:39.296 dma/hisilicon: not in enabled drivers build config 00:01:39.296 dma/idxd: not in enabled drivers build config 00:01:39.296 dma/ioat: not in enabled drivers build config 00:01:39.296 dma/skeleton: not in enabled drivers build config 00:01:39.296 net/af_packet: not in enabled drivers build config 00:01:39.296 net/af_xdp: not in enabled drivers build config 00:01:39.296 net/ark: not in enabled drivers build config 00:01:39.296 net/atlantic: not in enabled drivers build config 00:01:39.296 net/avp: not in enabled drivers build config 00:01:39.296 net/axgbe: not in enabled drivers build config 00:01:39.296 net/bnx2x: not in enabled drivers build config 00:01:39.296 net/bnxt: not in enabled drivers build config 00:01:39.296 net/bonding: not in enabled drivers build config 00:01:39.296 net/cnxk: not in enabled drivers build config 00:01:39.296 net/cpfl: not in enabled drivers build config 00:01:39.296 net/cxgbe: not in enabled drivers build config 00:01:39.296 net/dpaa: not in enabled drivers build config 00:01:39.296 net/dpaa2: not in enabled drivers build config 00:01:39.296 net/e1000: not in enabled drivers build config 00:01:39.296 net/ena: not in enabled drivers build config 00:01:39.296 net/enetc: not in enabled drivers build config 00:01:39.296 net/enetfec: not in enabled drivers build config 00:01:39.296 net/enic: not in enabled drivers build config 00:01:39.296 net/failsafe: not in enabled drivers build config 00:01:39.296 net/fm10k: not in enabled drivers build config 00:01:39.296 net/gve: not in enabled drivers build config 00:01:39.296 net/hinic: not in enabled drivers build config 00:01:39.296 net/hns3: not in enabled drivers build config 00:01:39.296 net/i40e: not in enabled drivers build config 00:01:39.296 net/iavf: not in enabled drivers build config 00:01:39.296 net/ice: not in enabled drivers build config 00:01:39.296 net/idpf: not in enabled drivers build config 00:01:39.296 net/igc: not in enabled drivers build config 00:01:39.296 net/ionic: not in enabled drivers build config 00:01:39.296 net/ipn3ke: not in enabled drivers build config 00:01:39.296 net/ixgbe: not in enabled drivers build config 00:01:39.296 net/mana: not in enabled drivers build config 00:01:39.296 net/memif: not in enabled drivers build config 00:01:39.296 net/mlx4: not in 
enabled drivers build config 00:01:39.296 net/mlx5: not in enabled drivers build config 00:01:39.296 net/mvneta: not in enabled drivers build config 00:01:39.296 net/mvpp2: not in enabled drivers build config 00:01:39.296 net/netvsc: not in enabled drivers build config 00:01:39.296 net/nfb: not in enabled drivers build config 00:01:39.296 net/nfp: not in enabled drivers build config 00:01:39.296 net/ngbe: not in enabled drivers build config 00:01:39.296 net/null: not in enabled drivers build config 00:01:39.296 net/octeontx: not in enabled drivers build config 00:01:39.296 net/octeon_ep: not in enabled drivers build config 00:01:39.296 net/pcap: not in enabled drivers build config 00:01:39.296 net/pfe: not in enabled drivers build config 00:01:39.296 net/qede: not in enabled drivers build config 00:01:39.296 net/ring: not in enabled drivers build config 00:01:39.296 net/sfc: not in enabled drivers build config 00:01:39.296 net/softnic: not in enabled drivers build config 00:01:39.296 net/tap: not in enabled drivers build config 00:01:39.296 net/thunderx: not in enabled drivers build config 00:01:39.296 net/txgbe: not in enabled drivers build config 00:01:39.296 net/vdev_netvsc: not in enabled drivers build config 00:01:39.296 net/vhost: not in enabled drivers build config 00:01:39.297 net/virtio: not in enabled drivers build config 00:01:39.297 net/vmxnet3: not in enabled drivers build config 00:01:39.297 raw/*: missing internal dependency, "rawdev" 00:01:39.297 crypto/armv8: not in enabled drivers build config 00:01:39.297 crypto/bcmfs: not in enabled drivers build config 00:01:39.297 crypto/caam_jr: not in enabled drivers build config 00:01:39.297 crypto/ccp: not in enabled drivers build config 00:01:39.297 crypto/cnxk: not in enabled drivers build config 00:01:39.297 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.297 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.297 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.297 crypto/mlx5: not in enabled drivers build config 00:01:39.297 crypto/mvsam: not in enabled drivers build config 00:01:39.297 crypto/nitrox: not in enabled drivers build config 00:01:39.297 crypto/null: not in enabled drivers build config 00:01:39.297 crypto/octeontx: not in enabled drivers build config 00:01:39.297 crypto/openssl: not in enabled drivers build config 00:01:39.297 crypto/scheduler: not in enabled drivers build config 00:01:39.297 crypto/uadk: not in enabled drivers build config 00:01:39.297 crypto/virtio: not in enabled drivers build config 00:01:39.297 compress/isal: not in enabled drivers build config 00:01:39.297 compress/mlx5: not in enabled drivers build config 00:01:39.297 compress/nitrox: not in enabled drivers build config 00:01:39.297 compress/octeontx: not in enabled drivers build config 00:01:39.297 compress/zlib: not in enabled drivers build config 00:01:39.297 regex/*: missing internal dependency, "regexdev" 00:01:39.297 ml/*: missing internal dependency, "mldev" 00:01:39.297 vdpa/ifc: not in enabled drivers build config 00:01:39.297 vdpa/mlx5: not in enabled drivers build config 00:01:39.297 vdpa/nfp: not in enabled drivers build config 00:01:39.297 vdpa/sfc: not in enabled drivers build config 00:01:39.297 event/*: missing internal dependency, "eventdev" 00:01:39.297 baseband/*: missing internal dependency, "bbdev" 00:01:39.297 gpu/*: missing internal dependency, "gpudev" 00:01:39.297 00:01:39.297 00:01:39.297 Build targets in project: 85 00:01:39.297 00:01:39.297 DPDK 24.03.0 00:01:39.297 
00:01:39.297 User defined options 00:01:39.297 buildtype : debug 00:01:39.297 default_library : shared 00:01:39.297 libdir : lib 00:01:39.297 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.297 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:39.297 c_link_args : 00:01:39.297 cpu_instruction_set: native 00:01:39.297 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:39.297 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:39.297 enable_docs : false 00:01:39.297 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:39.297 enable_kmods : false 00:01:39.297 max_lcores : 128 00:01:39.297 tests : false 00:01:39.297 00:01:39.297 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.570 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.570 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.570 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.570 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.833 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.833 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.833 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.833 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.833 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.833 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.833 [10/268] Linking static target lib/librte_kvargs.a 00:01:39.833 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.833 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.833 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.833 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.833 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.833 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.833 [17/268] Linking static target lib/librte_log.a 00:01:39.833 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.833 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.833 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.833 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.833 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.833 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.833 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.833 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.833 [26/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.833 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.833 [28/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:39.833 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:40.093 [30/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:40.093 [31/268] Linking static target lib/librte_pci.a 00:01:40.093 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.093 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.093 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.093 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.093 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.093 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.093 [38/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.351 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.351 [40/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.351 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.351 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.351 [43/268] Linking static target lib/librte_ring.a 00:01:40.351 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.351 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.351 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.351 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.351 [48/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.351 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.351 [50/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.351 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.351 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.351 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.351 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.351 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.351 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.351 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.351 [58/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.351 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.351 [60/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.351 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.351 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.351 [63/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.351 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.351 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.351 [66/268] Linking static target lib/librte_meter.a 00:01:40.351 [67/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.351 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.351 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.351 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.351 [71/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.351 [72/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.351 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.351 [74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.351 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.351 [76/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.351 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.351 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.351 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.351 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.351 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.351 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.351 [83/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.351 [84/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.351 [85/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.351 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.351 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.351 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.351 [89/268] Linking static target lib/librte_telemetry.a 00:01:40.351 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.351 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.351 [92/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.351 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.351 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.351 [95/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.610 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.610 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.610 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.610 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.610 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.610 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.610 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.610 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.610 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.610 [105/268] Linking static target lib/librte_rcu.a 00:01:40.610 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.610 [107/268] Linking static target 
lib/net/libnet_crc_avx512_lib.a 00:01:40.610 [108/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.610 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.610 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.610 [111/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.610 [112/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.610 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.610 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.610 [115/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.610 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.610 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.610 [118/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.610 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.610 [120/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.610 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.610 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.610 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.610 [124/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.610 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.610 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.610 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.610 [128/268] Linking static target lib/librte_mempool.a 00:01:40.610 [129/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.610 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.610 [131/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.610 [132/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.610 [133/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.610 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.610 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.610 [136/268] Linking static target lib/librte_eal.a 00:01:40.610 [137/268] Linking static target lib/librte_timer.a 00:01:40.610 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.610 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.610 [140/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.610 [141/268] Linking static target lib/librte_dmadev.a 00:01:40.610 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.610 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.610 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.610 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.610 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.610 [147/268] Linking static target lib/librte_compressdev.a 
00:01:40.610 [148/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.610 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.610 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.610 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.610 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.610 [153/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.610 [154/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.610 [155/268] Linking static target lib/librte_cmdline.a 00:01:40.610 [156/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.610 [157/268] Linking target lib/librte_log.so.24.1 00:01:40.992 [158/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.992 [159/268] Linking static target lib/librte_net.a 00:01:40.992 [160/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.992 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.992 [162/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.992 [163/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.992 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.992 [165/268] Linking static target lib/librte_security.a 00:01:40.992 [166/268] Linking static target lib/librte_mbuf.a 00:01:40.992 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.992 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.992 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.992 [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.992 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.992 [172/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:40.992 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.992 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.992 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.992 [176/268] Linking target lib/librte_kvargs.so.24.1 00:01:40.992 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.992 [178/268] Linking static target lib/librte_power.a 00:01:40.992 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.992 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.992 [181/268] Linking static target lib/librte_hash.a 00:01:40.992 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.992 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.992 [184/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.992 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.992 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.992 [187/268] Linking static target lib/librte_reorder.a 00:01:40.992 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.992 
[189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.992 [190/268] Linking target lib/librte_telemetry.so.24.1 00:01:40.992 [191/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.992 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.992 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.992 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.992 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.992 [196/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.251 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.251 [198/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.251 [199/268] Linking static target lib/librte_cryptodev.a 00:01:41.251 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.251 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.251 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:41.251 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.251 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.252 [205/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:41.252 [206/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.252 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.252 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.252 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.252 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.252 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:41.252 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.252 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.252 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:41.510 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.510 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.510 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.510 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.510 [219/268] Linking static target lib/librte_ethdev.a 00:01:41.510 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.510 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.768 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.025 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.025 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.025 [225/268] Generating lib/cmdline.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:42.025 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:42.025 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.959 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:42.959 [229/268] Linking static target lib/librte_vhost.a 00:01:43.218 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.592 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.859 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.795 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.795 [234/268] Linking target lib/librte_eal.so.24.1 00:01:51.054 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.054 [236/268] Linking target lib/librte_ring.so.24.1 00:01:51.054 [237/268] Linking target lib/librte_meter.so.24.1 00:01:51.054 [238/268] Linking target lib/librte_pci.so.24.1 00:01:51.054 [239/268] Linking target lib/librte_timer.so.24.1 00:01:51.054 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.054 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:51.313 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:51.313 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:51.313 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:51.313 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:51.313 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:51.313 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:51.313 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:51.313 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:51.572 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:51.572 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:51.572 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:51.572 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:51.572 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:51.830 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:51.830 [256/268] Linking target lib/librte_net.so.24.1 00:01:51.830 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:51.830 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:51.830 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:51.830 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:51.830 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:51.830 [262/268] Linking target lib/librte_hash.so.24.1 00:01:51.830 [263/268] Linking target lib/librte_ethdev.so.24.1 00:01:51.830 [264/268] Linking target lib/librte_security.so.24.1 00:01:52.089 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:52.089 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:52.089 [267/268] Linking target lib/librte_vhost.so.24.1 00:01:52.089 
[268/268] Linking target lib/librte_power.so.24.1 00:01:52.089 INFO: autodetecting backend as ninja 00:01:52.089 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:10.174 CC lib/log/log.o 00:02:10.174 CC lib/log/log_flags.o 00:02:10.174 CC lib/log/log_deprecated.o 00:02:10.174 CC lib/ut/ut.o 00:02:10.174 CC lib/ut_mock/mock.o 00:02:10.174 LIB libspdk_ut_mock.a 00:02:10.174 LIB libspdk_log.a 00:02:10.174 LIB libspdk_ut.a 00:02:10.174 SO libspdk_ut_mock.so.6.0 00:02:10.174 SO libspdk_log.so.7.1 00:02:10.174 SO libspdk_ut.so.2.0 00:02:10.174 SYMLINK libspdk_ut_mock.so 00:02:10.174 SYMLINK libspdk_log.so 00:02:10.174 SYMLINK libspdk_ut.so 00:02:10.432 CC lib/dma/dma.o 00:02:10.432 CC lib/util/base64.o 00:02:10.432 CC lib/util/bit_array.o 00:02:10.432 CC lib/util/cpuset.o 00:02:10.432 CC lib/util/crc16.o 00:02:10.432 CC lib/util/crc32.o 00:02:10.432 CC lib/util/crc32c.o 00:02:10.432 CXX lib/trace_parser/trace.o 00:02:10.432 CC lib/util/crc32_ieee.o 00:02:10.432 CC lib/util/crc64.o 00:02:10.432 CC lib/util/dif.o 00:02:10.432 CC lib/util/fd.o 00:02:10.432 CC lib/util/fd_group.o 00:02:10.432 CC lib/util/file.o 00:02:10.432 CC lib/util/hexlify.o 00:02:10.432 CC lib/util/iov.o 00:02:10.432 CC lib/util/math.o 00:02:10.432 CC lib/util/net.o 00:02:10.432 CC lib/util/pipe.o 00:02:10.432 CC lib/util/strerror_tls.o 00:02:10.432 CC lib/util/string.o 00:02:10.432 CC lib/util/xor.o 00:02:10.432 CC lib/util/uuid.o 00:02:10.432 CC lib/ioat/ioat.o 00:02:10.432 CC lib/util/zipf.o 00:02:10.432 CC lib/util/md5.o 00:02:10.432 CC lib/vfio_user/host/vfio_user.o 00:02:10.432 CC lib/vfio_user/host/vfio_user_pci.o 00:02:10.432 LIB libspdk_dma.a 00:02:10.691 SO libspdk_dma.so.5.0 00:02:10.691 SYMLINK libspdk_dma.so 00:02:10.691 LIB libspdk_ioat.a 00:02:10.691 SO libspdk_ioat.so.7.0 00:02:10.950 LIB libspdk_vfio_user.a 00:02:10.950 SYMLINK libspdk_ioat.so 00:02:10.950 SO libspdk_vfio_user.so.5.0 00:02:10.950 SYMLINK libspdk_vfio_user.so 00:02:10.950 LIB libspdk_util.a 00:02:11.208 SO libspdk_util.so.10.1 00:02:11.208 SYMLINK libspdk_util.so 00:02:11.466 LIB libspdk_trace_parser.a 00:02:11.466 SO libspdk_trace_parser.so.6.0 00:02:11.466 SYMLINK libspdk_trace_parser.so 00:02:11.466 CC lib/vmd/vmd.o 00:02:11.466 CC lib/vmd/led.o 00:02:11.466 CC lib/rdma_utils/rdma_utils.o 00:02:11.466 CC lib/idxd/idxd.o 00:02:11.466 CC lib/json/json_parse.o 00:02:11.466 CC lib/idxd/idxd_user.o 00:02:11.466 CC lib/json/json_util.o 00:02:11.466 CC lib/idxd/idxd_kernel.o 00:02:11.466 CC lib/json/json_write.o 00:02:11.466 CC lib/env_dpdk/env.o 00:02:11.466 CC lib/env_dpdk/pci_ioat.o 00:02:11.466 CC lib/env_dpdk/memory.o 00:02:11.466 CC lib/env_dpdk/pci.o 00:02:11.466 CC lib/env_dpdk/init.o 00:02:11.466 CC lib/env_dpdk/threads.o 00:02:11.466 CC lib/env_dpdk/pci_virtio.o 00:02:11.466 CC lib/env_dpdk/pci_dpdk.o 00:02:11.466 CC lib/env_dpdk/pci_vmd.o 00:02:11.466 CC lib/env_dpdk/sigbus_handler.o 00:02:11.466 CC lib/env_dpdk/pci_idxd.o 00:02:11.466 CC lib/env_dpdk/pci_event.o 00:02:11.466 CC lib/conf/conf.o 00:02:11.466 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.466 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.724 LIB libspdk_conf.a 00:02:11.724 SO libspdk_conf.so.6.0 00:02:11.982 LIB libspdk_rdma_utils.a 00:02:11.982 LIB libspdk_json.a 00:02:11.982 SO libspdk_rdma_utils.so.1.0 00:02:11.982 SYMLINK libspdk_conf.so 00:02:11.982 SO libspdk_json.so.6.0 00:02:11.982 LIB libspdk_idxd.a 00:02:11.982 SYMLINK libspdk_rdma_utils.so 00:02:11.982 SYMLINK libspdk_json.so 
00:02:11.982 SO libspdk_idxd.so.12.1 00:02:11.982 LIB libspdk_vmd.a 00:02:11.982 SO libspdk_vmd.so.6.0 00:02:11.982 SYMLINK libspdk_idxd.so 00:02:12.240 SYMLINK libspdk_vmd.so 00:02:12.240 CC lib/rdma_provider/common.o 00:02:12.240 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:12.240 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.240 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.240 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.240 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.498 LIB libspdk_rdma_provider.a 00:02:12.498 SO libspdk_rdma_provider.so.7.0 00:02:12.498 LIB libspdk_jsonrpc.a 00:02:12.498 SO libspdk_jsonrpc.so.6.0 00:02:12.755 SYMLINK libspdk_rdma_provider.so 00:02:12.755 SYMLINK libspdk_jsonrpc.so 00:02:13.014 LIB libspdk_env_dpdk.a 00:02:13.014 SO libspdk_env_dpdk.so.15.1 00:02:13.014 CC lib/rpc/rpc.o 00:02:13.014 SYMLINK libspdk_env_dpdk.so 00:02:13.273 LIB libspdk_rpc.a 00:02:13.273 SO libspdk_rpc.so.6.0 00:02:13.273 SYMLINK libspdk_rpc.so 00:02:13.531 CC lib/trace/trace.o 00:02:13.531 CC lib/trace/trace_flags.o 00:02:13.531 CC lib/trace/trace_rpc.o 00:02:13.531 CC lib/notify/notify.o 00:02:13.531 CC lib/notify/notify_rpc.o 00:02:13.531 CC lib/keyring/keyring.o 00:02:13.531 CC lib/keyring/keyring_rpc.o 00:02:13.789 LIB libspdk_notify.a 00:02:13.789 SO libspdk_notify.so.6.0 00:02:13.789 LIB libspdk_trace.a 00:02:13.789 LIB libspdk_keyring.a 00:02:14.049 SYMLINK libspdk_notify.so 00:02:14.049 SO libspdk_trace.so.11.0 00:02:14.049 SO libspdk_keyring.so.2.0 00:02:14.049 SYMLINK libspdk_trace.so 00:02:14.049 SYMLINK libspdk_keyring.so 00:02:14.308 CC lib/sock/sock.o 00:02:14.308 CC lib/sock/sock_rpc.o 00:02:14.308 CC lib/thread/thread.o 00:02:14.308 CC lib/thread/iobuf.o 00:02:14.874 LIB libspdk_sock.a 00:02:14.874 SO libspdk_sock.so.10.0 00:02:14.874 SYMLINK libspdk_sock.so 00:02:15.133 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:15.133 CC lib/nvme/nvme_ctrlr.o 00:02:15.133 CC lib/nvme/nvme_fabric.o 00:02:15.133 CC lib/nvme/nvme_ns.o 00:02:15.133 CC lib/nvme/nvme_ns_cmd.o 00:02:15.133 CC lib/nvme/nvme_pcie_common.o 00:02:15.133 CC lib/nvme/nvme_pcie.o 00:02:15.133 CC lib/nvme/nvme_qpair.o 00:02:15.133 CC lib/nvme/nvme.o 00:02:15.133 CC lib/nvme/nvme_quirks.o 00:02:15.133 CC lib/nvme/nvme_transport.o 00:02:15.133 CC lib/nvme/nvme_discovery.o 00:02:15.133 CC lib/nvme/nvme_io_msg.o 00:02:15.133 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.133 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.133 CC lib/nvme/nvme_tcp.o 00:02:15.133 CC lib/nvme/nvme_opal.o 00:02:15.133 CC lib/nvme/nvme_poll_group.o 00:02:15.133 CC lib/nvme/nvme_zns.o 00:02:15.133 CC lib/nvme/nvme_rdma.o 00:02:15.133 CC lib/nvme/nvme_stubs.o 00:02:15.133 CC lib/nvme/nvme_auth.o 00:02:15.133 CC lib/nvme/nvme_cuse.o 00:02:15.133 CC lib/nvme/nvme_vfio_user.o 00:02:15.392 LIB libspdk_thread.a 00:02:15.392 SO libspdk_thread.so.11.0 00:02:15.392 SYMLINK libspdk_thread.so 00:02:15.651 CC lib/fsdev/fsdev.o 00:02:15.651 CC lib/fsdev/fsdev_io.o 00:02:15.651 CC lib/fsdev/fsdev_rpc.o 00:02:15.651 CC lib/blob/blobstore.o 00:02:15.651 CC lib/blob/request.o 00:02:15.651 CC lib/accel/accel.o 00:02:15.651 CC lib/blob/zeroes.o 00:02:15.651 CC lib/vfu_tgt/tgt_endpoint.o 00:02:15.651 CC lib/virtio/virtio.o 00:02:15.651 CC lib/accel/accel_rpc.o 00:02:15.651 CC lib/blob/blob_bs_dev.o 00:02:15.651 CC lib/virtio/virtio_vhost_user.o 00:02:15.651 CC lib/vfu_tgt/tgt_rpc.o 00:02:15.651 CC lib/accel/accel_sw.o 00:02:15.651 CC lib/virtio/virtio_vfio_user.o 00:02:15.651 CC lib/virtio/virtio_pci.o 00:02:15.651 CC lib/init/json_config.o 00:02:15.651 CC lib/init/subsystem.o 
00:02:15.651 CC lib/init/subsystem_rpc.o 00:02:15.651 CC lib/init/rpc.o 00:02:15.910 LIB libspdk_init.a 00:02:16.168 SO libspdk_init.so.6.0 00:02:16.168 LIB libspdk_vfu_tgt.a 00:02:16.168 SYMLINK libspdk_init.so 00:02:16.168 LIB libspdk_virtio.a 00:02:16.168 SO libspdk_vfu_tgt.so.3.0 00:02:16.168 SO libspdk_virtio.so.7.0 00:02:16.168 LIB libspdk_fsdev.a 00:02:16.168 SYMLINK libspdk_vfu_tgt.so 00:02:16.168 SO libspdk_fsdev.so.2.0 00:02:16.168 SYMLINK libspdk_virtio.so 00:02:16.426 SYMLINK libspdk_fsdev.so 00:02:16.426 CC lib/event/reactor.o 00:02:16.426 CC lib/event/app.o 00:02:16.426 CC lib/event/scheduler_static.o 00:02:16.426 CC lib/event/log_rpc.o 00:02:16.426 CC lib/event/app_rpc.o 00:02:16.685 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:16.944 LIB libspdk_accel.a 00:02:16.944 LIB libspdk_event.a 00:02:16.944 SO libspdk_accel.so.16.0 00:02:16.944 SO libspdk_event.so.14.0 00:02:16.944 SYMLINK libspdk_accel.so 00:02:16.944 SYMLINK libspdk_event.so 00:02:17.203 LIB libspdk_fuse_dispatcher.a 00:02:17.203 CC lib/bdev/bdev.o 00:02:17.203 CC lib/bdev/part.o 00:02:17.203 CC lib/bdev/bdev_rpc.o 00:02:17.203 CC lib/bdev/bdev_zone.o 00:02:17.203 CC lib/bdev/scsi_nvme.o 00:02:17.203 SO libspdk_fuse_dispatcher.so.1.0 00:02:17.461 SYMLINK libspdk_fuse_dispatcher.so 00:02:17.461 LIB libspdk_nvme.a 00:02:17.461 SO libspdk_nvme.so.15.0 00:02:17.720 SYMLINK libspdk_nvme.so 00:02:19.096 LIB libspdk_blob.a 00:02:19.096 SO libspdk_blob.so.11.0 00:02:19.096 SYMLINK libspdk_blob.so 00:02:19.353 CC lib/lvol/lvol.o 00:02:19.353 CC lib/blobfs/blobfs.o 00:02:19.353 CC lib/blobfs/tree.o 00:02:20.312 LIB libspdk_bdev.a 00:02:20.313 SO libspdk_bdev.so.17.0 00:02:20.313 LIB libspdk_blobfs.a 00:02:20.313 SO libspdk_blobfs.so.10.0 00:02:20.313 SYMLINK libspdk_bdev.so 00:02:20.313 LIB libspdk_lvol.a 00:02:20.313 SYMLINK libspdk_blobfs.so 00:02:20.313 SO libspdk_lvol.so.10.0 00:02:20.313 SYMLINK libspdk_lvol.so 00:02:20.572 CC lib/ftl/ftl_core.o 00:02:20.572 CC lib/ftl/ftl_layout.o 00:02:20.572 CC lib/ftl/ftl_init.o 00:02:20.572 CC lib/ftl/ftl_debug.o 00:02:20.572 CC lib/ftl/ftl_io.o 00:02:20.572 CC lib/ftl/ftl_l2p.o 00:02:20.572 CC lib/ftl/ftl_sb.o 00:02:20.572 CC lib/nbd/nbd.o 00:02:20.572 CC lib/ftl/ftl_l2p_flat.o 00:02:20.572 CC lib/nbd/nbd_rpc.o 00:02:20.572 CC lib/ftl/ftl_nv_cache.o 00:02:20.572 CC lib/ftl/ftl_band.o 00:02:20.572 CC lib/ftl/ftl_band_ops.o 00:02:20.572 CC lib/ftl/ftl_rq.o 00:02:20.572 CC lib/ftl/ftl_writer.o 00:02:20.572 CC lib/ftl/ftl_reloc.o 00:02:20.572 CC lib/ftl/ftl_l2p_cache.o 00:02:20.572 CC lib/scsi/port.o 00:02:20.572 CC lib/ftl/ftl_p2l.o 00:02:20.572 CC lib/scsi/dev.o 00:02:20.572 CC lib/scsi/lun.o 00:02:20.572 CC lib/scsi/scsi.o 00:02:20.572 CC lib/ftl/ftl_p2l_log.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:20.572 CC lib/scsi/scsi_bdev.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:20.572 CC lib/scsi/scsi_pr.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:20.572 CC lib/scsi/scsi_rpc.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:20.572 CC lib/scsi/task.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:20.572 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:20.572 CC lib/ftl/utils/ftl_conf.o 00:02:20.572 CC lib/ftl/utils/ftl_md.o 00:02:20.572 CC 
lib/ublk/ublk.o 00:02:20.572 CC lib/ftl/utils/ftl_mempool.o 00:02:20.572 CC lib/ublk/ublk_rpc.o 00:02:20.572 CC lib/ftl/utils/ftl_bitmap.o 00:02:20.572 CC lib/ftl/utils/ftl_property.o 00:02:20.572 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:20.572 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:20.572 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:20.572 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:20.572 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:20.572 CC lib/nvmf/ctrlr.o 00:02:20.572 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:20.572 CC lib/nvmf/ctrlr_bdev.o 00:02:20.572 CC lib/nvmf/ctrlr_discovery.o 00:02:20.572 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:20.572 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:20.572 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:20.572 CC lib/nvmf/subsystem.o 00:02:20.572 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:20.572 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:20.572 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:20.572 CC lib/nvmf/nvmf.o 00:02:20.572 CC lib/ftl/base/ftl_base_dev.o 00:02:20.572 CC lib/nvmf/nvmf_rpc.o 00:02:20.572 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:20.572 CC lib/ftl/ftl_trace.o 00:02:20.572 CC lib/ftl/base/ftl_base_bdev.o 00:02:20.572 CC lib/nvmf/transport.o 00:02:20.572 CC lib/nvmf/tcp.o 00:02:20.572 CC lib/nvmf/mdns_server.o 00:02:20.572 CC lib/nvmf/stubs.o 00:02:20.572 CC lib/nvmf/vfio_user.o 00:02:20.572 CC lib/nvmf/rdma.o 00:02:20.572 CC lib/nvmf/auth.o 00:02:21.139 LIB libspdk_nbd.a 00:02:21.139 SO libspdk_nbd.so.7.0 00:02:21.139 SYMLINK libspdk_nbd.so 00:02:21.399 LIB libspdk_ublk.a 00:02:21.399 SO libspdk_ublk.so.3.0 00:02:21.399 SYMLINK libspdk_ublk.so 00:02:21.399 LIB libspdk_scsi.a 00:02:21.399 SO libspdk_scsi.so.9.0 00:02:21.657 SYMLINK libspdk_scsi.so 00:02:21.915 CC lib/vhost/vhost.o 00:02:21.915 CC lib/vhost/vhost_rpc.o 00:02:21.915 CC lib/vhost/vhost_scsi.o 00:02:21.915 CC lib/vhost/vhost_blk.o 00:02:21.915 CC lib/vhost/rte_vhost_user.o 00:02:21.915 CC lib/iscsi/conn.o 00:02:21.915 CC lib/iscsi/init_grp.o 00:02:21.915 CC lib/iscsi/iscsi.o 00:02:21.915 CC lib/iscsi/param.o 00:02:21.915 CC lib/iscsi/portal_grp.o 00:02:21.915 CC lib/iscsi/tgt_node.o 00:02:21.915 CC lib/iscsi/iscsi_subsystem.o 00:02:21.915 CC lib/iscsi/iscsi_rpc.o 00:02:21.915 CC lib/iscsi/task.o 00:02:21.915 LIB libspdk_ftl.a 00:02:22.174 SO libspdk_ftl.so.9.0 00:02:22.174 SYMLINK libspdk_ftl.so 00:02:23.112 LIB libspdk_vhost.a 00:02:23.112 SO libspdk_vhost.so.8.0 00:02:23.112 LIB libspdk_nvmf.a 00:02:23.112 SYMLINK libspdk_vhost.so 00:02:23.112 SO libspdk_nvmf.so.20.0 00:02:23.371 SYMLINK libspdk_nvmf.so 00:02:23.371 LIB libspdk_iscsi.a 00:02:23.371 SO libspdk_iscsi.so.8.0 00:02:23.371 SYMLINK libspdk_iscsi.so 00:02:23.938 CC module/vfu_device/vfu_virtio_blk.o 00:02:23.938 CC module/vfu_device/vfu_virtio.o 00:02:23.938 CC module/vfu_device/vfu_virtio_scsi.o 00:02:23.938 CC module/vfu_device/vfu_virtio_rpc.o 00:02:23.938 CC module/env_dpdk/env_dpdk_rpc.o 00:02:23.938 CC module/vfu_device/vfu_virtio_fs.o 00:02:24.196 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:24.196 CC module/keyring/linux/keyring.o 00:02:24.196 CC module/keyring/linux/keyring_rpc.o 00:02:24.196 CC module/keyring/file/keyring.o 00:02:24.196 CC module/keyring/file/keyring_rpc.o 00:02:24.196 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:24.196 CC module/blob/bdev/blob_bdev.o 00:02:24.196 CC module/accel/iaa/accel_iaa_rpc.o 00:02:24.196 CC module/accel/iaa/accel_iaa.o 00:02:24.196 CC module/sock/posix/posix.o 00:02:24.196 CC module/fsdev/aio/fsdev_aio.o 00:02:24.196 CC 
module/accel/error/accel_error_rpc.o 00:02:24.196 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:24.196 CC module/accel/error/accel_error.o 00:02:24.196 CC module/fsdev/aio/linux_aio_mgr.o 00:02:24.196 CC module/accel/dsa/accel_dsa_rpc.o 00:02:24.196 CC module/accel/dsa/accel_dsa.o 00:02:24.196 CC module/scheduler/gscheduler/gscheduler.o 00:02:24.196 CC module/accel/ioat/accel_ioat_rpc.o 00:02:24.196 CC module/accel/ioat/accel_ioat.o 00:02:24.196 LIB libspdk_env_dpdk_rpc.a 00:02:24.196 SO libspdk_env_dpdk_rpc.so.6.0 00:02:24.196 SYMLINK libspdk_env_dpdk_rpc.so 00:02:24.196 LIB libspdk_keyring_linux.a 00:02:24.196 LIB libspdk_keyring_file.a 00:02:24.455 LIB libspdk_accel_ioat.a 00:02:24.455 SO libspdk_keyring_linux.so.1.0 00:02:24.455 LIB libspdk_scheduler_dpdk_governor.a 00:02:24.455 LIB libspdk_accel_error.a 00:02:24.455 SO libspdk_keyring_file.so.2.0 00:02:24.455 LIB libspdk_scheduler_dynamic.a 00:02:24.455 LIB libspdk_scheduler_gscheduler.a 00:02:24.455 SO libspdk_accel_ioat.so.6.0 00:02:24.455 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:24.455 SO libspdk_accel_error.so.2.0 00:02:24.455 SO libspdk_scheduler_dynamic.so.4.0 00:02:24.455 SO libspdk_scheduler_gscheduler.so.4.0 00:02:24.455 SYMLINK libspdk_keyring_file.so 00:02:24.455 LIB libspdk_accel_iaa.a 00:02:24.455 SYMLINK libspdk_keyring_linux.so 00:02:24.455 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:24.455 SYMLINK libspdk_accel_ioat.so 00:02:24.455 SYMLINK libspdk_accel_error.so 00:02:24.455 SO libspdk_accel_iaa.so.3.0 00:02:24.455 SYMLINK libspdk_scheduler_dynamic.so 00:02:24.455 SYMLINK libspdk_scheduler_gscheduler.so 00:02:24.455 LIB libspdk_blob_bdev.a 00:02:24.455 LIB libspdk_accel_dsa.a 00:02:24.455 SO libspdk_blob_bdev.so.11.0 00:02:24.455 SO libspdk_accel_dsa.so.5.0 00:02:24.455 SYMLINK libspdk_accel_iaa.so 00:02:24.455 SYMLINK libspdk_blob_bdev.so 00:02:24.714 SYMLINK libspdk_accel_dsa.so 00:02:24.714 LIB libspdk_vfu_device.a 00:02:24.714 SO libspdk_vfu_device.so.3.0 00:02:24.714 SYMLINK libspdk_vfu_device.so 00:02:24.974 LIB libspdk_fsdev_aio.a 00:02:24.974 SO libspdk_fsdev_aio.so.1.0 00:02:24.974 LIB libspdk_sock_posix.a 00:02:24.974 SO libspdk_sock_posix.so.6.0 00:02:24.974 SYMLINK libspdk_fsdev_aio.so 00:02:24.974 CC module/bdev/delay/vbdev_delay.o 00:02:24.974 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:24.974 CC module/bdev/lvol/vbdev_lvol.o 00:02:24.974 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:24.974 CC module/blobfs/bdev/blobfs_bdev.o 00:02:24.974 CC module/bdev/raid/bdev_raid.o 00:02:24.974 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:24.974 CC module/bdev/raid/bdev_raid_sb.o 00:02:24.974 CC module/bdev/raid/bdev_raid_rpc.o 00:02:24.974 CC module/bdev/raid/raid0.o 00:02:24.974 CC module/bdev/gpt/gpt.o 00:02:24.974 CC module/bdev/malloc/bdev_malloc.o 00:02:24.974 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:24.974 CC module/bdev/raid/raid1.o 00:02:24.974 CC module/bdev/split/vbdev_split.o 00:02:24.974 CC module/bdev/null/bdev_null_rpc.o 00:02:24.974 CC module/bdev/gpt/vbdev_gpt.o 00:02:24.974 CC module/bdev/raid/concat.o 00:02:24.974 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:24.974 CC module/bdev/split/vbdev_split_rpc.o 00:02:24.974 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:24.974 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:24.974 CC module/bdev/null/bdev_null.o 00:02:24.974 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:24.974 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:24.974 CC module/bdev/passthru/vbdev_passthru.o 00:02:24.974 CC module/bdev/passthru/vbdev_passthru_rpc.o 
00:02:24.974 CC module/bdev/nvme/bdev_nvme.o 00:02:24.974 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:24.974 CC module/bdev/nvme/nvme_rpc.o 00:02:24.974 CC module/bdev/nvme/bdev_mdns_client.o 00:02:24.974 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:24.974 CC module/bdev/nvme/vbdev_opal.o 00:02:24.974 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:24.974 CC module/bdev/ftl/bdev_ftl.o 00:02:24.974 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:24.974 CC module/bdev/iscsi/bdev_iscsi.o 00:02:24.974 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:24.974 CC module/bdev/aio/bdev_aio.o 00:02:24.974 CC module/bdev/aio/bdev_aio_rpc.o 00:02:24.974 CC module/bdev/error/vbdev_error.o 00:02:24.974 CC module/bdev/error/vbdev_error_rpc.o 00:02:24.974 SYMLINK libspdk_sock_posix.so 00:02:25.233 LIB libspdk_blobfs_bdev.a 00:02:25.233 SO libspdk_blobfs_bdev.so.6.0 00:02:25.233 LIB libspdk_bdev_split.a 00:02:25.491 SO libspdk_bdev_split.so.6.0 00:02:25.491 LIB libspdk_bdev_null.a 00:02:25.491 SYMLINK libspdk_blobfs_bdev.so 00:02:25.491 SO libspdk_bdev_null.so.6.0 00:02:25.491 LIB libspdk_bdev_passthru.a 00:02:25.491 LIB libspdk_bdev_error.a 00:02:25.491 LIB libspdk_bdev_gpt.a 00:02:25.491 SYMLINK libspdk_bdev_split.so 00:02:25.491 SO libspdk_bdev_passthru.so.6.0 00:02:25.491 LIB libspdk_bdev_ftl.a 00:02:25.491 SO libspdk_bdev_error.so.6.0 00:02:25.491 SO libspdk_bdev_gpt.so.6.0 00:02:25.491 LIB libspdk_bdev_delay.a 00:02:25.491 LIB libspdk_bdev_zone_block.a 00:02:25.491 LIB libspdk_bdev_aio.a 00:02:25.491 SYMLINK libspdk_bdev_null.so 00:02:25.491 SO libspdk_bdev_ftl.so.6.0 00:02:25.491 LIB libspdk_bdev_malloc.a 00:02:25.491 SO libspdk_bdev_aio.so.6.0 00:02:25.491 SO libspdk_bdev_zone_block.so.6.0 00:02:25.491 SO libspdk_bdev_delay.so.6.0 00:02:25.491 SYMLINK libspdk_bdev_passthru.so 00:02:25.491 SYMLINK libspdk_bdev_gpt.so 00:02:25.491 LIB libspdk_bdev_iscsi.a 00:02:25.491 SYMLINK libspdk_bdev_error.so 00:02:25.491 SO libspdk_bdev_malloc.so.6.0 00:02:25.491 SO libspdk_bdev_iscsi.so.6.0 00:02:25.491 SYMLINK libspdk_bdev_ftl.so 00:02:25.491 SYMLINK libspdk_bdev_zone_block.so 00:02:25.491 SYMLINK libspdk_bdev_aio.so 00:02:25.491 SYMLINK libspdk_bdev_delay.so 00:02:25.750 SYMLINK libspdk_bdev_malloc.so 00:02:25.750 LIB libspdk_bdev_lvol.a 00:02:25.750 SYMLINK libspdk_bdev_iscsi.so 00:02:25.750 LIB libspdk_bdev_virtio.a 00:02:25.750 SO libspdk_bdev_virtio.so.6.0 00:02:25.750 SO libspdk_bdev_lvol.so.6.0 00:02:25.750 SYMLINK libspdk_bdev_virtio.so 00:02:25.750 SYMLINK libspdk_bdev_lvol.so 00:02:26.319 LIB libspdk_bdev_raid.a 00:02:26.319 SO libspdk_bdev_raid.so.6.0 00:02:26.319 SYMLINK libspdk_bdev_raid.so 00:02:26.886 LIB libspdk_bdev_nvme.a 00:02:26.886 SO libspdk_bdev_nvme.so.7.1 00:02:27.145 SYMLINK libspdk_bdev_nvme.so 00:02:27.715 CC module/event/subsystems/scheduler/scheduler.o 00:02:27.715 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:27.715 CC module/event/subsystems/sock/sock.o 00:02:27.715 CC module/event/subsystems/vmd/vmd.o 00:02:27.715 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:27.715 CC module/event/subsystems/iobuf/iobuf.o 00:02:27.715 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:27.715 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:27.715 CC module/event/subsystems/fsdev/fsdev.o 00:02:27.715 CC module/event/subsystems/keyring/keyring.o 00:02:27.715 LIB libspdk_event_vfu_tgt.a 00:02:27.715 LIB libspdk_event_keyring.a 00:02:27.715 LIB libspdk_event_scheduler.a 00:02:27.715 LIB libspdk_event_vmd.a 00:02:27.715 LIB libspdk_event_sock.a 00:02:27.974 LIB libspdk_event_vhost_blk.a 00:02:27.974 SO 
libspdk_event_vfu_tgt.so.3.0 00:02:27.974 SO libspdk_event_keyring.so.1.0 00:02:27.974 SO libspdk_event_scheduler.so.4.0 00:02:27.974 SO libspdk_event_vmd.so.6.0 00:02:27.974 LIB libspdk_event_fsdev.a 00:02:27.974 SO libspdk_event_sock.so.5.0 00:02:27.974 LIB libspdk_event_iobuf.a 00:02:27.974 SO libspdk_event_vhost_blk.so.3.0 00:02:27.974 SO libspdk_event_fsdev.so.1.0 00:02:27.974 SO libspdk_event_iobuf.so.3.0 00:02:27.974 SYMLINK libspdk_event_vfu_tgt.so 00:02:27.974 SYMLINK libspdk_event_keyring.so 00:02:27.974 SYMLINK libspdk_event_sock.so 00:02:27.974 SYMLINK libspdk_event_scheduler.so 00:02:27.974 SYMLINK libspdk_event_vmd.so 00:02:27.974 SYMLINK libspdk_event_vhost_blk.so 00:02:27.974 SYMLINK libspdk_event_fsdev.so 00:02:27.974 SYMLINK libspdk_event_iobuf.so 00:02:28.234 CC module/event/subsystems/accel/accel.o 00:02:28.492 LIB libspdk_event_accel.a 00:02:28.492 SO libspdk_event_accel.so.6.0 00:02:28.492 SYMLINK libspdk_event_accel.so 00:02:28.751 CC module/event/subsystems/bdev/bdev.o 00:02:29.011 LIB libspdk_event_bdev.a 00:02:29.011 SO libspdk_event_bdev.so.6.0 00:02:29.011 SYMLINK libspdk_event_bdev.so 00:02:29.270 CC module/event/subsystems/scsi/scsi.o 00:02:29.270 CC module/event/subsystems/ublk/ublk.o 00:02:29.270 CC module/event/subsystems/nbd/nbd.o 00:02:29.270 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:29.270 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:29.529 LIB libspdk_event_ublk.a 00:02:29.529 LIB libspdk_event_nbd.a 00:02:29.529 LIB libspdk_event_scsi.a 00:02:29.529 SO libspdk_event_ublk.so.3.0 00:02:29.529 SO libspdk_event_nbd.so.6.0 00:02:29.529 SO libspdk_event_scsi.so.6.0 00:02:29.529 LIB libspdk_event_nvmf.a 00:02:29.529 SYMLINK libspdk_event_ublk.so 00:02:29.529 SYMLINK libspdk_event_nbd.so 00:02:29.529 SYMLINK libspdk_event_scsi.so 00:02:29.529 SO libspdk_event_nvmf.so.6.0 00:02:29.788 SYMLINK libspdk_event_nvmf.so 00:02:29.788 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:30.047 CC module/event/subsystems/iscsi/iscsi.o 00:02:30.047 LIB libspdk_event_vhost_scsi.a 00:02:30.047 SO libspdk_event_vhost_scsi.so.3.0 00:02:30.047 LIB libspdk_event_iscsi.a 00:02:30.047 SYMLINK libspdk_event_vhost_scsi.so 00:02:30.047 SO libspdk_event_iscsi.so.6.0 00:02:30.306 SYMLINK libspdk_event_iscsi.so 00:02:30.306 SO libspdk.so.6.0 00:02:30.306 SYMLINK libspdk.so 00:02:30.918 CC test/rpc_client/rpc_client_test.o 00:02:30.918 TEST_HEADER include/spdk/accel.h 00:02:30.918 TEST_HEADER include/spdk/accel_module.h 00:02:30.918 TEST_HEADER include/spdk/assert.h 00:02:30.918 TEST_HEADER include/spdk/barrier.h 00:02:30.918 TEST_HEADER include/spdk/bdev_module.h 00:02:30.918 TEST_HEADER include/spdk/bdev.h 00:02:30.918 TEST_HEADER include/spdk/base64.h 00:02:30.918 TEST_HEADER include/spdk/bit_array.h 00:02:30.918 TEST_HEADER include/spdk/bdev_zone.h 00:02:30.918 TEST_HEADER include/spdk/blob_bdev.h 00:02:30.918 TEST_HEADER include/spdk/blobfs.h 00:02:30.918 TEST_HEADER include/spdk/bit_pool.h 00:02:30.918 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:30.918 TEST_HEADER include/spdk/conf.h 00:02:30.918 TEST_HEADER include/spdk/config.h 00:02:30.918 TEST_HEADER include/spdk/crc16.h 00:02:30.918 TEST_HEADER include/spdk/blob.h 00:02:30.918 TEST_HEADER include/spdk/crc32.h 00:02:30.918 TEST_HEADER include/spdk/cpuset.h 00:02:30.918 TEST_HEADER include/spdk/dif.h 00:02:30.918 TEST_HEADER include/spdk/crc64.h 00:02:30.918 TEST_HEADER include/spdk/dma.h 00:02:30.918 TEST_HEADER include/spdk/env.h 00:02:30.919 TEST_HEADER include/spdk/endian.h 00:02:30.919 TEST_HEADER 
include/spdk/env_dpdk.h 00:02:30.919 CC app/spdk_top/spdk_top.o 00:02:30.919 TEST_HEADER include/spdk/event.h 00:02:30.919 CC app/spdk_nvme_perf/perf.o 00:02:30.919 TEST_HEADER include/spdk/fd.h 00:02:30.919 TEST_HEADER include/spdk/file.h 00:02:30.919 TEST_HEADER include/spdk/fd_group.h 00:02:30.919 TEST_HEADER include/spdk/fsdev_module.h 00:02:30.919 TEST_HEADER include/spdk/fsdev.h 00:02:30.919 TEST_HEADER include/spdk/ftl.h 00:02:30.919 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:30.919 TEST_HEADER include/spdk/histogram_data.h 00:02:30.919 TEST_HEADER include/spdk/gpt_spec.h 00:02:30.919 TEST_HEADER include/spdk/idxd.h 00:02:30.919 TEST_HEADER include/spdk/idxd_spec.h 00:02:30.919 TEST_HEADER include/spdk/hexlify.h 00:02:30.919 CC app/spdk_nvme_identify/identify.o 00:02:30.919 CXX app/trace/trace.o 00:02:30.919 TEST_HEADER include/spdk/ioat_spec.h 00:02:30.919 CC app/spdk_nvme_discover/discovery_aer.o 00:02:30.919 TEST_HEADER include/spdk/ioat.h 00:02:30.919 TEST_HEADER include/spdk/init.h 00:02:30.919 TEST_HEADER include/spdk/json.h 00:02:30.919 TEST_HEADER include/spdk/jsonrpc.h 00:02:30.919 TEST_HEADER include/spdk/iscsi_spec.h 00:02:30.919 TEST_HEADER include/spdk/keyring.h 00:02:30.919 TEST_HEADER include/spdk/likely.h 00:02:30.919 TEST_HEADER include/spdk/keyring_module.h 00:02:30.919 TEST_HEADER include/spdk/log.h 00:02:30.919 TEST_HEADER include/spdk/memory.h 00:02:30.919 TEST_HEADER include/spdk/md5.h 00:02:30.919 TEST_HEADER include/spdk/lvol.h 00:02:30.919 TEST_HEADER include/spdk/mmio.h 00:02:30.919 CC app/spdk_lspci/spdk_lspci.o 00:02:30.919 TEST_HEADER include/spdk/nbd.h 00:02:30.919 TEST_HEADER include/spdk/net.h 00:02:30.919 TEST_HEADER include/spdk/notify.h 00:02:30.919 CC app/trace_record/trace_record.o 00:02:30.919 TEST_HEADER include/spdk/nvme_intel.h 00:02:30.919 TEST_HEADER include/spdk/nvme.h 00:02:30.919 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:30.919 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:30.919 TEST_HEADER include/spdk/nvme_spec.h 00:02:30.919 CC app/spdk_dd/spdk_dd.o 00:02:30.919 TEST_HEADER include/spdk/nvme_zns.h 00:02:30.919 TEST_HEADER include/spdk/nvmf_spec.h 00:02:30.919 TEST_HEADER include/spdk/nvmf.h 00:02:30.919 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:30.919 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:30.919 TEST_HEADER include/spdk/nvmf_transport.h 00:02:30.919 TEST_HEADER include/spdk/opal.h 00:02:30.919 TEST_HEADER include/spdk/opal_spec.h 00:02:30.919 TEST_HEADER include/spdk/pci_ids.h 00:02:30.919 TEST_HEADER include/spdk/rpc.h 00:02:30.919 TEST_HEADER include/spdk/queue.h 00:02:30.919 TEST_HEADER include/spdk/reduce.h 00:02:30.919 TEST_HEADER include/spdk/pipe.h 00:02:30.919 TEST_HEADER include/spdk/scheduler.h 00:02:30.919 TEST_HEADER include/spdk/scsi_spec.h 00:02:30.919 TEST_HEADER include/spdk/scsi.h 00:02:30.919 TEST_HEADER include/spdk/sock.h 00:02:30.919 TEST_HEADER include/spdk/string.h 00:02:30.919 TEST_HEADER include/spdk/stdinc.h 00:02:30.919 TEST_HEADER include/spdk/thread.h 00:02:30.919 TEST_HEADER include/spdk/trace_parser.h 00:02:30.919 TEST_HEADER include/spdk/tree.h 00:02:30.919 TEST_HEADER include/spdk/trace.h 00:02:30.919 TEST_HEADER include/spdk/ublk.h 00:02:30.919 TEST_HEADER include/spdk/util.h 00:02:30.919 TEST_HEADER include/spdk/uuid.h 00:02:30.919 TEST_HEADER include/spdk/version.h 00:02:30.919 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:30.919 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:30.919 TEST_HEADER include/spdk/vmd.h 00:02:30.919 TEST_HEADER include/spdk/vhost.h 00:02:30.919 
TEST_HEADER include/spdk/xor.h 00:02:30.919 TEST_HEADER include/spdk/zipf.h 00:02:30.919 CXX test/cpp_headers/accel.o 00:02:30.919 CXX test/cpp_headers/accel_module.o 00:02:30.919 CXX test/cpp_headers/assert.o 00:02:30.919 CXX test/cpp_headers/barrier.o 00:02:30.919 CXX test/cpp_headers/bdev.o 00:02:30.919 CXX test/cpp_headers/base64.o 00:02:30.919 CXX test/cpp_headers/bdev_module.o 00:02:30.919 CXX test/cpp_headers/bit_array.o 00:02:30.919 CXX test/cpp_headers/bdev_zone.o 00:02:30.919 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:30.919 CXX test/cpp_headers/blob_bdev.o 00:02:30.919 CXX test/cpp_headers/bit_pool.o 00:02:30.919 CXX test/cpp_headers/blobfs.o 00:02:30.919 CXX test/cpp_headers/blob.o 00:02:30.919 CXX test/cpp_headers/conf.o 00:02:30.919 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.919 CXX test/cpp_headers/cpuset.o 00:02:30.919 CXX test/cpp_headers/crc16.o 00:02:30.919 CXX test/cpp_headers/config.o 00:02:30.919 CXX test/cpp_headers/crc64.o 00:02:30.919 CXX test/cpp_headers/dma.o 00:02:30.919 CXX test/cpp_headers/crc32.o 00:02:30.919 CC app/nvmf_tgt/nvmf_main.o 00:02:30.919 CXX test/cpp_headers/endian.o 00:02:30.919 CXX test/cpp_headers/env_dpdk.o 00:02:30.919 CXX test/cpp_headers/event.o 00:02:30.919 CXX test/cpp_headers/dif.o 00:02:30.919 CXX test/cpp_headers/fd_group.o 00:02:30.919 CXX test/cpp_headers/fd.o 00:02:30.919 CXX test/cpp_headers/env.o 00:02:30.919 CXX test/cpp_headers/fsdev.o 00:02:30.919 CXX test/cpp_headers/fsdev_module.o 00:02:30.919 CXX test/cpp_headers/ftl.o 00:02:30.919 CXX test/cpp_headers/fuse_dispatcher.o 00:02:30.919 CXX test/cpp_headers/gpt_spec.o 00:02:30.919 CXX test/cpp_headers/file.o 00:02:30.919 CXX test/cpp_headers/hexlify.o 00:02:30.919 CXX test/cpp_headers/histogram_data.o 00:02:30.919 CXX test/cpp_headers/idxd_spec.o 00:02:30.919 CXX test/cpp_headers/idxd.o 00:02:30.919 CXX test/cpp_headers/init.o 00:02:30.919 CXX test/cpp_headers/ioat.o 00:02:30.919 CXX test/cpp_headers/ioat_spec.o 00:02:30.919 CXX test/cpp_headers/iscsi_spec.o 00:02:30.919 CXX test/cpp_headers/json.o 00:02:30.919 CC app/iscsi_tgt/iscsi_tgt.o 00:02:30.919 CXX test/cpp_headers/keyring.o 00:02:30.919 CXX test/cpp_headers/keyring_module.o 00:02:30.919 CXX test/cpp_headers/jsonrpc.o 00:02:30.919 CXX test/cpp_headers/log.o 00:02:30.919 CXX test/cpp_headers/likely.o 00:02:30.919 CXX test/cpp_headers/lvol.o 00:02:30.919 CXX test/cpp_headers/md5.o 00:02:30.919 CXX test/cpp_headers/net.o 00:02:30.919 CXX test/cpp_headers/nbd.o 00:02:30.919 CXX test/cpp_headers/memory.o 00:02:30.919 CXX test/cpp_headers/mmio.o 00:02:30.919 CXX test/cpp_headers/notify.o 00:02:30.919 CXX test/cpp_headers/nvme_intel.o 00:02:30.919 CC app/spdk_tgt/spdk_tgt.o 00:02:30.919 CXX test/cpp_headers/nvme.o 00:02:30.919 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.919 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.919 CXX test/cpp_headers/nvme_spec.o 00:02:30.919 CXX test/cpp_headers/nvmf_cmd.o 00:02:30.919 CXX test/cpp_headers/nvme_zns.o 00:02:30.919 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.919 CXX test/cpp_headers/nvmf.o 00:02:30.919 CXX test/cpp_headers/nvmf_spec.o 00:02:30.919 CXX test/cpp_headers/nvmf_transport.o 00:02:30.919 CXX test/cpp_headers/opal.o 00:02:30.919 CXX test/cpp_headers/opal_spec.o 00:02:30.919 CXX test/cpp_headers/pipe.o 00:02:30.919 CXX test/cpp_headers/pci_ids.o 00:02:30.919 CXX test/cpp_headers/queue.o 00:02:30.919 CXX test/cpp_headers/reduce.o 00:02:30.919 CXX test/cpp_headers/rpc.o 00:02:30.919 CXX test/cpp_headers/scheduler.o 00:02:30.919 CXX test/cpp_headers/scsi.o 00:02:30.919 CXX 
test/cpp_headers/scsi_spec.o 00:02:30.919 CXX test/cpp_headers/sock.o 00:02:30.919 CXX test/cpp_headers/stdinc.o 00:02:30.919 CXX test/cpp_headers/thread.o 00:02:30.919 CXX test/cpp_headers/string.o 00:02:30.919 CXX test/cpp_headers/trace.o 00:02:30.919 CXX test/cpp_headers/trace_parser.o 00:02:30.919 CXX test/cpp_headers/tree.o 00:02:30.919 CXX test/cpp_headers/ublk.o 00:02:30.919 CC test/env/pci/pci_ut.o 00:02:30.919 CC test/env/memory/memory_ut.o 00:02:30.919 CC test/env/vtophys/vtophys.o 00:02:30.919 CC test/app/histogram_perf/histogram_perf.o 00:02:30.919 CC test/app/stub/stub.o 00:02:30.919 CC test/thread/poller_perf/poller_perf.o 00:02:31.237 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:31.237 CC test/app/jsoncat/jsoncat.o 00:02:31.237 CXX test/cpp_headers/util.o 00:02:31.237 CC test/dma/test_dma/test_dma.o 00:02:31.237 CC test/app/bdev_svc/bdev_svc.o 00:02:31.237 CC examples/util/zipf/zipf.o 00:02:31.237 CC examples/ioat/perf/perf.o 00:02:31.237 CC examples/ioat/verify/verify.o 00:02:31.237 CC app/fio/bdev/fio_plugin.o 00:02:31.237 CC app/fio/nvme/fio_plugin.o 00:02:31.568 LINK spdk_lspci 00:02:31.568 LINK spdk_nvme_discover 00:02:31.568 CXX test/cpp_headers/uuid.o 00:02:31.568 CXX test/cpp_headers/version.o 00:02:31.568 CXX test/cpp_headers/vfio_user_pci.o 00:02:31.568 LINK interrupt_tgt 00:02:31.887 CXX test/cpp_headers/vfio_user_spec.o 00:02:31.887 CXX test/cpp_headers/vhost.o 00:02:31.887 LINK iscsi_tgt 00:02:31.887 CXX test/cpp_headers/vmd.o 00:02:31.887 CXX test/cpp_headers/xor.o 00:02:31.887 CXX test/cpp_headers/zipf.o 00:02:31.887 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:31.887 LINK vtophys 00:02:31.887 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:31.887 LINK nvmf_tgt 00:02:31.887 LINK rpc_client_test 00:02:31.887 LINK histogram_perf 00:02:31.887 CC test/env/mem_callbacks/mem_callbacks.o 00:02:31.887 LINK env_dpdk_post_init 00:02:31.887 LINK jsoncat 00:02:31.887 LINK zipf 00:02:31.887 LINK poller_perf 00:02:31.887 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:31.887 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:31.887 LINK bdev_svc 00:02:31.887 LINK spdk_tgt 00:02:31.887 LINK stub 00:02:31.887 LINK spdk_trace_record 00:02:31.887 LINK pci_ut 00:02:31.887 LINK verify 00:02:32.186 LINK spdk_trace 00:02:32.186 LINK ioat_perf 00:02:32.186 LINK spdk_dd 00:02:32.186 LINK spdk_nvme 00:02:32.186 LINK spdk_bdev 00:02:32.186 LINK test_dma 00:02:32.186 CC examples/vmd/lsvmd/lsvmd.o 00:02:32.186 CC examples/sock/hello_world/hello_sock.o 00:02:32.186 CC examples/vmd/led/led.o 00:02:32.469 CC examples/idxd/perf/perf.o 00:02:32.469 LINK nvme_fuzz 00:02:32.469 CC test/event/reactor_perf/reactor_perf.o 00:02:32.469 CC test/event/reactor/reactor.o 00:02:32.469 CC test/event/event_perf/event_perf.o 00:02:32.469 LINK vhost_fuzz 00:02:32.469 LINK spdk_nvme_perf 00:02:32.469 CC test/event/app_repeat/app_repeat.o 00:02:32.469 CC examples/thread/thread/thread_ex.o 00:02:32.469 LINK spdk_nvme_identify 00:02:32.469 CC test/event/scheduler/scheduler.o 00:02:32.469 LINK mem_callbacks 00:02:32.469 LINK lsvmd 00:02:32.469 LINK event_perf 00:02:32.469 CC app/vhost/vhost.o 00:02:32.469 LINK led 00:02:32.469 LINK reactor_perf 00:02:32.469 LINK reactor 00:02:32.469 LINK hello_sock 00:02:32.469 LINK app_repeat 00:02:32.744 LINK spdk_top 00:02:32.744 LINK scheduler 00:02:32.744 LINK thread 00:02:32.744 LINK idxd_perf 00:02:32.744 LINK vhost 00:02:32.744 CC test/nvme/err_injection/err_injection.o 00:02:32.744 CC test/nvme/connect_stress/connect_stress.o 00:02:32.744 CC 
test/nvme/reset/reset.o 00:02:32.744 CC test/nvme/overhead/overhead.o 00:02:32.744 CC test/nvme/startup/startup.o 00:02:32.744 CC test/nvme/simple_copy/simple_copy.o 00:02:32.744 CC test/nvme/aer/aer.o 00:02:32.744 CC test/nvme/e2edp/nvme_dp.o 00:02:32.744 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.744 CC test/nvme/reserve/reserve.o 00:02:32.744 CC test/nvme/compliance/nvme_compliance.o 00:02:32.744 CC test/nvme/cuse/cuse.o 00:02:32.744 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.744 CC test/nvme/sgl/sgl.o 00:02:32.744 CC test/nvme/boot_partition/boot_partition.o 00:02:32.744 CC test/nvme/fdp/fdp.o 00:02:32.744 CC test/accel/dif/dif.o 00:02:32.744 CC test/blobfs/mkfs/mkfs.o 00:02:32.744 LINK memory_ut 00:02:33.002 CC test/lvol/esnap/esnap.o 00:02:33.002 LINK connect_stress 00:02:33.002 LINK startup 00:02:33.002 LINK simple_copy 00:02:33.002 LINK err_injection 00:02:33.002 LINK boot_partition 00:02:33.002 LINK doorbell_aers 00:02:33.002 LINK fused_ordering 00:02:33.002 LINK reserve 00:02:33.002 CC examples/nvme/hello_world/hello_world.o 00:02:33.002 CC examples/nvme/reconnect/reconnect.o 00:02:33.002 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:33.002 CC examples/nvme/abort/abort.o 00:02:33.002 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:33.002 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:33.002 CC examples/nvme/hotplug/hotplug.o 00:02:33.002 LINK reset 00:02:33.002 CC examples/nvme/arbitration/arbitration.o 00:02:33.002 LINK nvme_dp 00:02:33.002 LINK overhead 00:02:33.002 LINK sgl 00:02:33.002 LINK aer 00:02:33.002 LINK mkfs 00:02:33.260 LINK nvme_compliance 00:02:33.260 CC examples/accel/perf/accel_perf.o 00:02:33.260 LINK fdp 00:02:33.260 CC examples/blob/hello_world/hello_blob.o 00:02:33.260 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:33.260 CC examples/blob/cli/blobcli.o 00:02:33.260 LINK pmr_persistence 00:02:33.260 LINK cmb_copy 00:02:33.260 LINK reconnect 00:02:33.260 LINK hello_world 00:02:33.260 LINK hotplug 00:02:33.519 LINK arbitration 00:02:33.519 LINK hello_fsdev 00:02:33.519 LINK hello_blob 00:02:33.519 LINK abort 00:02:33.519 LINK dif 00:02:33.519 LINK accel_perf 00:02:33.519 LINK iscsi_fuzz 00:02:33.519 LINK nvme_manage 00:02:33.787 LINK blobcli 00:02:34.046 CC examples/bdev/bdevperf/bdevperf.o 00:02:34.046 CC examples/bdev/hello_world/hello_bdev.o 00:02:34.046 CC test/bdev/bdevio/bdevio.o 00:02:34.304 LINK cuse 00:02:34.304 LINK hello_bdev 00:02:34.563 LINK bdevio 00:02:34.822 LINK bdevperf 00:02:35.388 CC examples/nvmf/nvmf/nvmf.o 00:02:35.954 LINK nvmf 00:02:38.487 LINK esnap 00:02:38.487 00:02:38.487 real 1m9.297s 00:02:38.487 user 9m46.740s 00:02:38.487 sys 4m27.898s 00:02:38.487 11:20:39 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:38.487 11:20:39 make -- common/autotest_common.sh@10 -- $ set +x 00:02:38.487 ************************************ 00:02:38.487 END TEST make 00:02:38.487 ************************************ 00:02:38.487 11:20:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:38.487 11:20:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:38.487 11:20:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:38.487 11:20:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.487 11:20:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:38.487 11:20:39 -- pm/common@44 -- $ pid=931170 00:02:38.487 11:20:39 -- pm/common@50 -- $ kill -TERM 931170 00:02:38.487 11:20:39 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.487 11:20:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:38.487 11:20:39 -- pm/common@44 -- $ pid=931171 00:02:38.487 11:20:39 -- pm/common@50 -- $ kill -TERM 931171 00:02:38.487 11:20:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.487 11:20:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:38.487 11:20:39 -- pm/common@44 -- $ pid=931173 00:02:38.487 11:20:39 -- pm/common@50 -- $ kill -TERM 931173 00:02:38.487 11:20:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.487 11:20:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:38.487 11:20:39 -- pm/common@44 -- $ pid=931192 00:02:38.487 11:20:39 -- pm/common@50 -- $ sudo -E kill -TERM 931192 00:02:38.746 11:20:39 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:38.746 11:20:39 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:38.746 11:20:39 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:38.746 11:20:39 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:38.746 11:20:39 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:38.746 11:20:39 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:38.746 11:20:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:38.746 11:20:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:38.746 11:20:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:38.746 11:20:39 -- scripts/common.sh@336 -- # IFS=.-: 00:02:38.746 11:20:39 -- scripts/common.sh@336 -- # read -ra ver1 00:02:38.746 11:20:39 -- scripts/common.sh@337 -- # IFS=.-: 00:02:38.746 11:20:39 -- scripts/common.sh@337 -- # read -ra ver2 00:02:38.746 11:20:39 -- scripts/common.sh@338 -- # local 'op=<' 00:02:38.746 11:20:39 -- scripts/common.sh@340 -- # ver1_l=2 00:02:38.746 11:20:39 -- scripts/common.sh@341 -- # ver2_l=1 00:02:38.746 11:20:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:38.746 11:20:39 -- scripts/common.sh@344 -- # case "$op" in 00:02:38.746 11:20:39 -- scripts/common.sh@345 -- # : 1 00:02:38.746 11:20:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:38.746 11:20:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.746 11:20:39 -- scripts/common.sh@365 -- # decimal 1 00:02:38.746 11:20:39 -- scripts/common.sh@353 -- # local d=1 00:02:38.746 11:20:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:38.746 11:20:39 -- scripts/common.sh@355 -- # echo 1 00:02:38.747 11:20:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:38.747 11:20:39 -- scripts/common.sh@366 -- # decimal 2 00:02:38.747 11:20:39 -- scripts/common.sh@353 -- # local d=2 00:02:38.747 11:20:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:38.747 11:20:39 -- scripts/common.sh@355 -- # echo 2 00:02:38.747 11:20:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:38.747 11:20:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:38.747 11:20:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:38.747 11:20:39 -- scripts/common.sh@368 -- # return 0 00:02:38.747 11:20:39 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:38.747 11:20:39 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.747 --rc genhtml_branch_coverage=1 00:02:38.747 --rc genhtml_function_coverage=1 00:02:38.747 --rc genhtml_legend=1 00:02:38.747 --rc geninfo_all_blocks=1 00:02:38.747 --rc geninfo_unexecuted_blocks=1 00:02:38.747 00:02:38.747 ' 00:02:38.747 11:20:39 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.747 --rc genhtml_branch_coverage=1 00:02:38.747 --rc genhtml_function_coverage=1 00:02:38.747 --rc genhtml_legend=1 00:02:38.747 --rc geninfo_all_blocks=1 00:02:38.747 --rc geninfo_unexecuted_blocks=1 00:02:38.747 00:02:38.747 ' 00:02:38.747 11:20:39 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.747 --rc genhtml_branch_coverage=1 00:02:38.747 --rc genhtml_function_coverage=1 00:02:38.747 --rc genhtml_legend=1 00:02:38.747 --rc geninfo_all_blocks=1 00:02:38.747 --rc geninfo_unexecuted_blocks=1 00:02:38.747 00:02:38.747 ' 00:02:38.747 11:20:39 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:38.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.747 --rc genhtml_branch_coverage=1 00:02:38.747 --rc genhtml_function_coverage=1 00:02:38.747 --rc genhtml_legend=1 00:02:38.747 --rc geninfo_all_blocks=1 00:02:38.747 --rc geninfo_unexecuted_blocks=1 00:02:38.747 00:02:38.747 ' 00:02:38.747 11:20:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:38.747 11:20:39 -- nvmf/common.sh@7 -- # uname -s 00:02:38.747 11:20:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:38.747 11:20:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:38.747 11:20:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:38.747 11:20:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:38.747 11:20:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:38.747 11:20:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:38.747 11:20:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:38.747 11:20:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:38.747 11:20:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:38.747 11:20:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:38.747 11:20:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:38.747 11:20:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:38.747 11:20:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:38.747 11:20:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:38.747 11:20:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:38.747 11:20:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:38.747 11:20:39 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:38.747 11:20:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:38.747 11:20:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:38.747 11:20:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:38.747 11:20:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:38.747 11:20:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.747 11:20:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.747 11:20:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.747 11:20:39 -- paths/export.sh@5 -- # export PATH 00:02:38.747 11:20:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.747 11:20:39 -- nvmf/common.sh@51 -- # : 0 00:02:38.747 11:20:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:38.747 11:20:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:38.747 11:20:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:38.747 11:20:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:38.747 11:20:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:38.747 11:20:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:38.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:38.747 11:20:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:38.747 11:20:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:38.747 11:20:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:38.747 11:20:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:38.747 11:20:39 -- spdk/autotest.sh@32 -- # uname -s 00:02:38.747 11:20:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:38.747 11:20:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:38.747 11:20:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:02:38.747 11:20:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:38.747 11:20:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:38.747 11:20:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:38.747 11:20:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:38.747 11:20:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:38.747 11:20:39 -- spdk/autotest.sh@48 -- # udevadm_pid=997704 00:02:38.747 11:20:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:38.747 11:20:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:38.747 11:20:39 -- pm/common@17 -- # local monitor 00:02:38.747 11:20:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.747 11:20:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.747 11:20:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.747 11:20:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.747 11:20:39 -- pm/common@21 -- # date +%s 00:02:38.747 11:20:39 -- pm/common@21 -- # date +%s 00:02:38.747 11:20:39 -- pm/common@25 -- # sleep 1 00:02:38.747 11:20:39 -- pm/common@21 -- # date +%s 00:02:38.747 11:20:39 -- pm/common@21 -- # date +%s 00:02:38.747 11:20:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666039 00:02:38.747 11:20:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666039 00:02:38.747 11:20:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666039 00:02:38.747 11:20:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666039 00:02:39.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666039_collect-vmstat.pm.log 00:02:39.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666039_collect-cpu-load.pm.log 00:02:39.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666039_collect-cpu-temp.pm.log 00:02:39.006 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666039_collect-bmc-pm.bmc.pm.log 00:02:39.943 11:20:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:39.943 11:20:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:39.943 11:20:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:39.943 11:20:40 -- common/autotest_common.sh@10 -- # set +x 00:02:39.943 11:20:40 -- spdk/autotest.sh@59 -- # create_test_list 00:02:39.943 11:20:40 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:39.943 11:20:40 -- common/autotest_common.sh@10 -- # set +x 00:02:39.943 11:20:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:39.943 11:20:40 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.943 11:20:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.943 11:20:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:39.943 11:20:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.943 11:20:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:39.943 11:20:40 -- common/autotest_common.sh@1455 -- # uname 00:02:39.943 11:20:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:39.943 11:20:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:39.943 11:20:40 -- common/autotest_common.sh@1475 -- # uname 00:02:39.943 11:20:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:39.943 11:20:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:39.943 11:20:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:39.943 lcov: LCOV version 1.15 00:02:39.943 11:20:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:01.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:01.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:19.972 11:21:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:19.972 11:21:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:19.972 11:21:17 -- common/autotest_common.sh@10 -- # set +x 00:03:19.972 11:21:17 -- spdk/autotest.sh@78 -- # rm -f 00:03:19.972 11:21:17 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.972 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:03:19.972 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:19.972 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:20.231 11:21:20 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:20.231 11:21:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:20.231 11:21:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:20.231 11:21:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:20.231 11:21:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:20.231 11:21:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:20.231 11:21:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:20.231 11:21:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.231 11:21:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:20.231 11:21:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:20.231 11:21:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:20.231 11:21:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:20.231 11:21:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:20.231 11:21:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:20.231 11:21:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:20.231 No valid GPT data, bailing 00:03:20.231 11:21:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:20.231 11:21:20 -- scripts/common.sh@394 -- # pt= 00:03:20.231 11:21:20 -- scripts/common.sh@395 -- # return 1 00:03:20.231 11:21:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:20.231 1+0 records in 00:03:20.231 1+0 records out 00:03:20.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485284 s, 216 MB/s 00:03:20.231 11:21:20 -- spdk/autotest.sh@105 -- # sync 00:03:20.231 11:21:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:20.231 11:21:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:20.231 11:21:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:26.798 11:21:27 -- spdk/autotest.sh@111 -- # uname -s 00:03:26.798 11:21:27 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:26.798 11:21:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:26.798 11:21:27 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.332 Hugepages 00:03:29.332 node hugesize free / total 00:03:29.332 node0 1048576kB 0 / 0 00:03:29.332 node0 2048kB 0 / 0 00:03:29.332 node1 1048576kB 0 / 0 00:03:29.332 node1 2048kB 0 / 0 00:03:29.332 00:03:29.332 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.332 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:29.332 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:29.332 NVMe 0000:86:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:03:29.332 11:21:29 -- spdk/autotest.sh@117 -- # uname -s 00:03:29.332 11:21:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:29.332 11:21:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:29.332 11:21:29 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.867 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.126 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.061 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.061 11:21:33 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:34.437 11:21:34 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:34.437 11:21:34 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:34.437 11:21:34 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:34.437 11:21:34 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:34.437 11:21:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:34.437 11:21:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:34.437 11:21:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.437 11:21:34 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.437 11:21:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:34.437 11:21:34 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:34.437 11:21:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:86:00.0 00:03:34.437 11:21:34 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.971 Waiting for block devices as requested 00:03:36.971 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:03:36.971 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:36.971 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:36.971 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.229 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:37.229 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:37.229 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:37.229 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:37.488 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:37.488 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.488 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.747 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.747 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:37.747 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:37.747 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:38.005 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.005 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:03:38.005 11:21:38 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:38.005 11:21:38 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1485 -- # grep 0000:86:00.0/nvme/nvme 00:03:38.005 11:21:38 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:03:38.005 11:21:38 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:38.005 11:21:38 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:38.005 11:21:38 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:38.005 11:21:38 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:38.005 11:21:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:38.005 11:21:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:38.005 11:21:38 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:38.005 11:21:38 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:38.005 11:21:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:38.005 11:21:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:38.005 11:21:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:38.005 11:21:38 -- common/autotest_common.sh@1541 -- # continue 00:03:38.005 11:21:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:38.005 11:21:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:38.005 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:03:38.005 11:21:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:38.005 11:21:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.005 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:03:38.005 11:21:38 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.291 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.291 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.292 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.292 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.292 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.858 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:41.858 11:21:42 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:41.858 11:21:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:41.858 11:21:42 -- common/autotest_common.sh@10 -- # set +x 00:03:41.858 11:21:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:41.858 11:21:42 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:41.858 11:21:42 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:41.859 11:21:42 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:41.859 11:21:42 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:41.859 11:21:42 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:41.859 11:21:42 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:41.859 11:21:42 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:41.859 11:21:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:41.859 11:21:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:41.859 11:21:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:41.859 11:21:42 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:41.859 11:21:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:42.117 11:21:42 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:42.117 11:21:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:86:00.0 00:03:42.117 11:21:42 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:42.117 11:21:42 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:03:42.117 11:21:42 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:42.117 11:21:42 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:42.117 11:21:42 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:42.117 11:21:42 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:42.117 11:21:42 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:86:00.0 00:03:42.117 11:21:42 -- common/autotest_common.sh@1577 -- # [[ -z 0000:86:00.0 ]] 00:03:42.117 11:21:42 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1015953 00:03:42.117 11:21:42 -- common/autotest_common.sh@1583 -- # waitforlisten 1015953 00:03:42.117 11:21:42 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.117 11:21:42 -- common/autotest_common.sh@833 -- # '[' -z 1015953 ']' 00:03:42.117 11:21:42 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.117 11:21:42 -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:42.117 11:21:42 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.117 11:21:42 -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:42.117 11:21:42 -- common/autotest_common.sh@10 -- # set +x 00:03:42.117 [2024-11-15 11:21:42.820019] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:03:42.117 [2024-11-15 11:21:42.820063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015953 ] 00:03:42.117 [2024-11-15 11:21:42.902340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.117 [2024-11-15 11:21:42.954113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.054 11:21:43 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:43.054 11:21:43 -- common/autotest_common.sh@866 -- # return 0 00:03:43.054 11:21:43 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:43.054 11:21:43 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:43.054 11:21:43 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:03:46.341 nvme0n1 00:03:46.341 11:21:46 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:46.341 [2024-11-15 11:21:46.758749] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:46.341 request: 00:03:46.341 { 00:03:46.341 "nvme_ctrlr_name": "nvme0", 00:03:46.341 "password": "test", 00:03:46.341 "method": "bdev_nvme_opal_revert", 00:03:46.341 "req_id": 1 00:03:46.341 } 00:03:46.341 Got JSON-RPC error response 00:03:46.341 response: 00:03:46.341 { 00:03:46.341 "code": -32602, 00:03:46.341 "message": "Invalid parameters" 00:03:46.341 } 00:03:46.341 11:21:46 -- common/autotest_common.sh@1589 -- # true 00:03:46.341 11:21:46 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:46.341 11:21:46 -- common/autotest_common.sh@1593 -- # killprocess 1015953 00:03:46.341 11:21:46 -- common/autotest_common.sh@952 -- # '[' -z 1015953 ']' 00:03:46.341 11:21:46 -- common/autotest_common.sh@956 -- # kill -0 1015953 00:03:46.341 11:21:46 -- common/autotest_common.sh@957 -- # uname 00:03:46.341 11:21:46 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:46.341 11:21:46 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1015953 00:03:46.341 11:21:46 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:46.341 11:21:46 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:46.341 11:21:46 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1015953' 00:03:46.341 killing process with pid 1015953 00:03:46.341 11:21:46 -- common/autotest_common.sh@971 -- # kill 1015953 00:03:46.341 11:21:46 -- common/autotest_common.sh@976 -- # wait 1015953 00:03:47.717 11:21:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:47.717 11:21:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:47.717 11:21:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:47.717 11:21:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:47.717 11:21:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:47.717 11:21:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.717 11:21:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.717 11:21:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:47.717 11:21:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:47.717 11:21:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.717 11:21:48 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:03:47.717 11:21:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.717 ************************************ 00:03:47.717 START TEST env 00:03:47.717 ************************************ 00:03:47.717 11:21:48 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:47.976 * Looking for test storage... 00:03:47.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:47.976 11:21:48 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:47.977 11:21:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.977 11:21:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.977 11:21:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.977 11:21:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.977 11:21:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.977 11:21:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.977 11:21:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.977 11:21:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.977 11:21:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.977 11:21:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.977 11:21:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.977 11:21:48 env -- scripts/common.sh@344 -- # case "$op" in 00:03:47.977 11:21:48 env -- scripts/common.sh@345 -- # : 1 00:03:47.977 11:21:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.977 11:21:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.977 11:21:48 env -- scripts/common.sh@365 -- # decimal 1 00:03:47.977 11:21:48 env -- scripts/common.sh@353 -- # local d=1 00:03:47.977 11:21:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.977 11:21:48 env -- scripts/common.sh@355 -- # echo 1 00:03:47.977 11:21:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.977 11:21:48 env -- scripts/common.sh@366 -- # decimal 2 00:03:47.977 11:21:48 env -- scripts/common.sh@353 -- # local d=2 00:03:47.977 11:21:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.977 11:21:48 env -- scripts/common.sh@355 -- # echo 2 00:03:47.977 11:21:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.977 11:21:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.977 11:21:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.977 11:21:48 env -- scripts/common.sh@368 -- # return 0 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.977 --rc genhtml_branch_coverage=1 00:03:47.977 --rc genhtml_function_coverage=1 00:03:47.977 --rc genhtml_legend=1 00:03:47.977 --rc geninfo_all_blocks=1 00:03:47.977 --rc geninfo_unexecuted_blocks=1 00:03:47.977 00:03:47.977 ' 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.977 --rc genhtml_branch_coverage=1 00:03:47.977 --rc genhtml_function_coverage=1 00:03:47.977 --rc genhtml_legend=1 00:03:47.977 --rc geninfo_all_blocks=1 00:03:47.977 --rc geninfo_unexecuted_blocks=1 00:03:47.977 00:03:47.977 ' 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.977 --rc genhtml_branch_coverage=1 00:03:47.977 --rc genhtml_function_coverage=1 00:03:47.977 --rc genhtml_legend=1 00:03:47.977 --rc geninfo_all_blocks=1 00:03:47.977 --rc geninfo_unexecuted_blocks=1 00:03:47.977 00:03:47.977 ' 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.977 --rc genhtml_branch_coverage=1 00:03:47.977 --rc genhtml_function_coverage=1 00:03:47.977 --rc genhtml_legend=1 00:03:47.977 --rc geninfo_all_blocks=1 00:03:47.977 --rc geninfo_unexecuted_blocks=1 00:03:47.977 00:03:47.977 ' 00:03:47.977 11:21:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.977 11:21:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.977 11:21:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:47.977 ************************************ 00:03:47.977 START TEST env_memory 00:03:47.977 ************************************ 00:03:47.977 11:21:48 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:47.977 00:03:47.977 00:03:47.977 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.977 http://cunit.sourceforge.net/ 00:03:47.977 00:03:47.977 00:03:47.977 Suite: memory 00:03:47.977 Test: alloc and free memory map ...[2024-11-15 11:21:48.802368] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:47.977 passed 00:03:48.236 Test: mem map translation ...[2024-11-15 11:21:48.831479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:48.236 [2024-11-15 11:21:48.831500] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.236 [2024-11-15 11:21:48.831554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.236 [2024-11-15 11:21:48.831570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.236 passed 00:03:48.236 Test: mem map registration ...[2024-11-15 11:21:48.891381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:48.236 [2024-11-15 11:21:48.891400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:48.236 passed 00:03:48.236 Test: mem map adjacent registrations ...passed 00:03:48.236 00:03:48.236 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.236 suites 1 1 n/a 0 0 00:03:48.236 tests 4 4 4 0 0 00:03:48.236 asserts 152 152 152 0 n/a 00:03:48.236 00:03:48.236 Elapsed time = 0.205 seconds 00:03:48.236 00:03:48.236 real 0m0.218s 00:03:48.236 user 0m0.204s 00:03:48.236 sys 0m0.014s 00:03:48.236 11:21:48 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.236 11:21:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:48.236 ************************************ 00:03:48.236 END TEST env_memory 00:03:48.236 ************************************ 00:03:48.236 11:21:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.236 11:21:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.236 11:21:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.236 11:21:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.236 ************************************ 00:03:48.236 START TEST env_vtophys 00:03:48.236 ************************************ 00:03:48.236 11:21:49 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.236 EAL: lib.eal log level changed from notice to debug 00:03:48.236 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.236 EAL: Detected lcore 1 as core 1 on socket 0 00:03:48.236 EAL: Detected lcore 2 as core 2 on socket 0 00:03:48.236 EAL: Detected lcore 3 as core 3 on socket 0 00:03:48.236 EAL: Detected lcore 4 as core 4 on socket 0 00:03:48.236 EAL: Detected lcore 5 as core 5 on socket 0 00:03:48.236 EAL: Detected lcore 6 as core 6 on socket 0 00:03:48.236 EAL: Detected lcore 7 as core 8 on socket 0 00:03:48.236 EAL: Detected lcore 8 as core 9 on socket 0 00:03:48.236 EAL: Detected lcore 9 as core 10 on socket 0 00:03:48.237 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:48.237 EAL: Detected lcore 11 as core 12 on socket 0 00:03:48.237 EAL: Detected lcore 12 as core 13 on socket 0 00:03:48.237 EAL: Detected lcore 13 as core 14 on socket 0 00:03:48.237 EAL: Detected lcore 14 as core 16 on socket 0 00:03:48.237 EAL: Detected lcore 15 as core 17 on socket 0 00:03:48.237 EAL: Detected lcore 16 as core 18 on socket 0 00:03:48.237 EAL: Detected lcore 17 as core 19 on socket 0 00:03:48.237 EAL: Detected lcore 18 as core 20 on socket 0 00:03:48.237 EAL: Detected lcore 19 as core 21 on socket 0 00:03:48.237 EAL: Detected lcore 20 as core 22 on socket 0 00:03:48.237 EAL: Detected lcore 21 as core 24 on socket 0 00:03:48.237 EAL: Detected lcore 22 as core 25 on socket 0 00:03:48.237 EAL: Detected lcore 23 as core 26 on socket 0 00:03:48.237 EAL: Detected lcore 24 as core 27 on socket 0 00:03:48.237 EAL: Detected lcore 25 as core 28 on socket 0 00:03:48.237 EAL: Detected lcore 26 as core 29 on socket 0 00:03:48.237 EAL: Detected lcore 27 as core 30 on socket 0 00:03:48.237 EAL: Detected lcore 28 as core 0 on socket 1 00:03:48.237 EAL: Detected lcore 29 as core 1 on socket 1 00:03:48.237 EAL: Detected lcore 30 as core 2 on socket 1 00:03:48.237 EAL: Detected lcore 31 as core 3 on socket 1 00:03:48.237 EAL: Detected lcore 32 as core 4 on socket 1 00:03:48.237 EAL: Detected lcore 33 as core 5 on socket 1 00:03:48.237 EAL: Detected lcore 34 as core 6 on socket 1 00:03:48.237 EAL: Detected lcore 35 as core 8 on socket 1 00:03:48.237 EAL: Detected lcore 36 as core 9 on socket 1 00:03:48.237 EAL: Detected lcore 37 as core 10 on socket 1 00:03:48.237 EAL: Detected lcore 38 as core 11 on socket 1 00:03:48.237 EAL: Detected lcore 39 as core 12 on socket 1 00:03:48.237 EAL: Detected lcore 40 as core 13 on socket 1 00:03:48.237 EAL: Detected lcore 41 as core 14 on socket 1 00:03:48.237 EAL: Detected lcore 42 as core 16 on socket 1 00:03:48.237 EAL: Detected lcore 43 as core 17 on socket 1 00:03:48.237 EAL: Detected lcore 44 as core 18 on socket 1 00:03:48.237 EAL: Detected lcore 45 as core 19 on socket 1 00:03:48.237 EAL: Detected lcore 46 as core 20 on socket 1 00:03:48.237 EAL: Detected lcore 47 as core 21 on socket 1 00:03:48.237 EAL: Detected lcore 48 as core 22 on socket 1 00:03:48.237 EAL: Detected lcore 49 as core 24 on socket 1 00:03:48.237 EAL: Detected lcore 50 as core 25 on socket 1 00:03:48.237 EAL: Detected lcore 51 as core 26 on socket 1 00:03:48.237 EAL: Detected lcore 52 as core 27 on socket 1 00:03:48.237 EAL: Detected lcore 53 as core 28 on socket 1 00:03:48.237 EAL: Detected lcore 54 as core 29 on socket 1 00:03:48.237 EAL: Detected lcore 55 as core 30 on socket 1 00:03:48.237 EAL: Detected lcore 56 as core 0 on socket 0 00:03:48.237 EAL: Detected lcore 57 as core 1 on socket 0 00:03:48.237 EAL: Detected lcore 58 as core 2 on socket 0 00:03:48.237 EAL: Detected lcore 59 as core 3 on socket 0 00:03:48.237 EAL: Detected lcore 60 as core 4 on socket 0 00:03:48.237 EAL: Detected lcore 61 as core 5 on socket 0 00:03:48.237 EAL: Detected lcore 62 as core 6 on socket 0 00:03:48.237 EAL: Detected lcore 63 as core 8 on socket 0 00:03:48.237 EAL: Detected lcore 64 as core 9 on socket 0 00:03:48.237 EAL: Detected lcore 65 as core 10 on socket 0 00:03:48.237 EAL: Detected lcore 66 as core 11 on socket 0 00:03:48.237 EAL: Detected lcore 67 as core 12 on socket 0 00:03:48.237 EAL: Detected lcore 68 as core 13 on socket 0 00:03:48.237 EAL: Detected lcore 69 as core 14 on socket 0 00:03:48.237 EAL: Detected lcore 70 as core 16 on socket 0 00:03:48.237 
EAL: Detected lcore 71 as core 17 on socket 0 00:03:48.237 EAL: Detected lcore 72 as core 18 on socket 0 00:03:48.237 EAL: Detected lcore 73 as core 19 on socket 0 00:03:48.237 EAL: Detected lcore 74 as core 20 on socket 0 00:03:48.237 EAL: Detected lcore 75 as core 21 on socket 0 00:03:48.237 EAL: Detected lcore 76 as core 22 on socket 0 00:03:48.237 EAL: Detected lcore 77 as core 24 on socket 0 00:03:48.237 EAL: Detected lcore 78 as core 25 on socket 0 00:03:48.237 EAL: Detected lcore 79 as core 26 on socket 0 00:03:48.237 EAL: Detected lcore 80 as core 27 on socket 0 00:03:48.237 EAL: Detected lcore 81 as core 28 on socket 0 00:03:48.237 EAL: Detected lcore 82 as core 29 on socket 0 00:03:48.237 EAL: Detected lcore 83 as core 30 on socket 0 00:03:48.237 EAL: Detected lcore 84 as core 0 on socket 1 00:03:48.237 EAL: Detected lcore 85 as core 1 on socket 1 00:03:48.237 EAL: Detected lcore 86 as core 2 on socket 1 00:03:48.237 EAL: Detected lcore 87 as core 3 on socket 1 00:03:48.237 EAL: Detected lcore 88 as core 4 on socket 1 00:03:48.237 EAL: Detected lcore 89 as core 5 on socket 1 00:03:48.237 EAL: Detected lcore 90 as core 6 on socket 1 00:03:48.237 EAL: Detected lcore 91 as core 8 on socket 1 00:03:48.237 EAL: Detected lcore 92 as core 9 on socket 1 00:03:48.237 EAL: Detected lcore 93 as core 10 on socket 1 00:03:48.237 EAL: Detected lcore 94 as core 11 on socket 1 00:03:48.237 EAL: Detected lcore 95 as core 12 on socket 1 00:03:48.237 EAL: Detected lcore 96 as core 13 on socket 1 00:03:48.237 EAL: Detected lcore 97 as core 14 on socket 1 00:03:48.237 EAL: Detected lcore 98 as core 16 on socket 1 00:03:48.237 EAL: Detected lcore 99 as core 17 on socket 1 00:03:48.237 EAL: Detected lcore 100 as core 18 on socket 1 00:03:48.237 EAL: Detected lcore 101 as core 19 on socket 1 00:03:48.237 EAL: Detected lcore 102 as core 20 on socket 1 00:03:48.237 EAL: Detected lcore 103 as core 21 on socket 1 00:03:48.237 EAL: Detected lcore 104 as core 22 on socket 1 00:03:48.237 EAL: Detected lcore 105 as core 24 on socket 1 00:03:48.237 EAL: Detected lcore 106 as core 25 on socket 1 00:03:48.237 EAL: Detected lcore 107 as core 26 on socket 1 00:03:48.237 EAL: Detected lcore 108 as core 27 on socket 1 00:03:48.237 EAL: Detected lcore 109 as core 28 on socket 1 00:03:48.237 EAL: Detected lcore 110 as core 29 on socket 1 00:03:48.237 EAL: Detected lcore 111 as core 30 on socket 1 00:03:48.237 EAL: Maximum logical cores by configuration: 128 00:03:48.237 EAL: Detected CPU lcores: 112 00:03:48.237 EAL: Detected NUMA nodes: 2 00:03:48.237 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:48.237 EAL: Detected shared linkage of DPDK 00:03:48.237 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.497 EAL: Bus pci wants IOVA as 'DC' 00:03:48.497 EAL: Buses did not request a specific IOVA mode. 00:03:48.497 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:48.497 EAL: Selected IOVA mode 'VA' 00:03:48.497 EAL: Probing VFIO support... 00:03:48.497 EAL: IOMMU type 1 (Type 1) is supported 00:03:48.497 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:48.497 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:48.497 EAL: VFIO support initialized 00:03:48.497 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.497 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.497 EAL: Setting up physically contiguous memory... 
00:03:48.497 EAL: Setting maximum number of open files to 524288 00:03:48.497 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.497 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:48.497 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.497 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:48.497 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.497 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:48.497 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.497 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.497 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:48.497 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:48.497 EAL: Hugepages will be freed exactly as allocated. 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: TSC frequency is ~2200000 KHz 00:03:48.497 EAL: Main lcore 0 is ready (tid=7f454248fa00;cpuset=[0]) 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 0 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.497 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.497 00:03:48.497 00:03:48.497 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.497 http://cunit.sourceforge.net/ 00:03:48.497 00:03:48.497 00:03:48.497 Suite: components_suite 00:03:48.497 Test: vtophys_malloc_test ...passed 00:03:48.497 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 6MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 6MB 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 10MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 10MB 00:03:48.497 EAL: Trying to obtain current memory policy. 
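The EAL memory-setup trace above reserves address space only: for each of the two NUMA sockets it creates 4 memseg lists of n_segs:8192 with a 2 MiB hugepage size, i.e. 8192 x 2 MiB = 16 GiB of virtual address space per list (the 0x400000000 areas in the log), 64 GiB per socket and 128 GiB in total, laid out from the base virtual address 0x200000000000 the test scripts request. The small 0x61000-byte areas are per-list bookkeeping; no hugepages are touched yet, they are faulted in later as the heap expands. A quick way to confirm that enough 2 MiB hugepages are available before a run like this (the HUGEMEM value below is only an illustrative figure, not taken from this job):

  # Inspect the per-node 2 MiB hugepage pools the EAL draws from
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

  # Reserve hugepages with the SPDK helper script (HUGEMEM is in MB)
  sudo HUGEMEM=4096 ./scripts/setup.sh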
00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 18MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 18MB 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 34MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 34MB 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 66MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 66MB 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 130MB 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was shrunk by 130MB 00:03:48.497 EAL: Trying to obtain current memory policy. 00:03:48.497 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.497 EAL: Restoring previous memory policy: 4 00:03:48.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.497 EAL: request: mp_malloc_sync 00:03:48.497 EAL: No shared files mode enabled, IPC is disabled 00:03:48.497 EAL: Heap on socket 0 was expanded by 258MB 00:03:48.756 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.756 EAL: request: mp_malloc_sync 00:03:48.756 EAL: No shared files mode enabled, IPC is disabled 00:03:48.756 EAL: Heap on socket 0 was shrunk by 258MB 00:03:48.756 EAL: Trying to obtain current memory policy. 
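In the vtophys_spdk_malloc_test cycle above and in the lines that follow, the test allocates a buffer, checks its physical translation, frees it, and doubles the request size: each round prints an "expanded by N MB" line when the allocation forces the DPDK heap to grow and a matching "shrunk by N MB" line after the free, with the 'spdk:(nil)' mem event callback invoked on every change so SPDK can update its vtophys mappings. The expansions of 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB are the power-of-two requests (2, 4, 8, ..., 1024 MB) plus one extra 2 MiB hugepage, presumably heap element overhead (that last point is an interpretation of the trace, not something the test prints). A throwaway loop reproducing the arithmetic:

  for mb in 2 4 8 16 32 64 128 256 512 1024; do
      printf 'request %4d MB -> heap grows by %4d MB\n' "$mb" $((mb + 2))
  done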
00:03:48.756 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.756 EAL: Restoring previous memory policy: 4 00:03:48.756 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.757 EAL: request: mp_malloc_sync 00:03:48.757 EAL: No shared files mode enabled, IPC is disabled 00:03:48.757 EAL: Heap on socket 0 was expanded by 514MB 00:03:48.757 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.015 EAL: request: mp_malloc_sync 00:03:49.015 EAL: No shared files mode enabled, IPC is disabled 00:03:49.015 EAL: Heap on socket 0 was shrunk by 514MB 00:03:49.015 EAL: Trying to obtain current memory policy. 00:03:49.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.274 EAL: Restoring previous memory policy: 4 00:03:49.274 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.274 EAL: request: mp_malloc_sync 00:03:49.274 EAL: No shared files mode enabled, IPC is disabled 00:03:49.274 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.274 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.533 EAL: request: mp_malloc_sync 00:03:49.533 EAL: No shared files mode enabled, IPC is disabled 00:03:49.533 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:49.533 passed 00:03:49.533 00:03:49.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.533 suites 1 1 n/a 0 0 00:03:49.533 tests 2 2 2 0 0 00:03:49.533 asserts 497 497 497 0 n/a 00:03:49.533 00:03:49.533 Elapsed time = 1.020 seconds 00:03:49.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.533 EAL: request: mp_malloc_sync 00:03:49.533 EAL: No shared files mode enabled, IPC is disabled 00:03:49.533 EAL: Heap on socket 0 was shrunk by 2MB 00:03:49.533 EAL: No shared files mode enabled, IPC is disabled 00:03:49.533 EAL: No shared files mode enabled, IPC is disabled 00:03:49.533 EAL: No shared files mode enabled, IPC is disabled 00:03:49.533 00:03:49.533 real 0m1.172s 00:03:49.533 user 0m0.690s 00:03:49.533 sys 0m0.454s 00:03:49.533 11:21:50 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:49.533 11:21:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:49.533 ************************************ 00:03:49.533 END TEST env_vtophys 00:03:49.533 ************************************ 00:03:49.533 11:21:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.533 11:21:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:49.533 11:21:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.533 11:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.533 ************************************ 00:03:49.533 START TEST env_pci 00:03:49.533 ************************************ 00:03:49.533 11:21:50 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.533 00:03:49.533 00:03:49.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.533 http://cunit.sourceforge.net/ 00:03:49.533 00:03:49.533 00:03:49.533 Suite: pci 00:03:49.533 Test: pci_hook ...[2024-11-15 11:21:50.297112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1017456 has claimed it 00:03:49.533 EAL: Cannot find device (10000:00:01.0) 00:03:49.533 EAL: Failed to attach device on primary process 00:03:49.533 passed 00:03:49.533 00:03:49.533 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:49.533 suites 1 1 n/a 0 0 00:03:49.533 tests 1 1 1 0 0 00:03:49.533 asserts 25 25 25 0 n/a 00:03:49.533 00:03:49.533 Elapsed time = 0.030 seconds 00:03:49.533 00:03:49.533 real 0m0.049s 00:03:49.533 user 0m0.017s 00:03:49.533 sys 0m0.032s 00:03:49.533 11:21:50 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:49.533 11:21:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:49.533 ************************************ 00:03:49.533 END TEST env_pci 00:03:49.533 ************************************ 00:03:49.533 11:21:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:49.533 11:21:50 env -- env/env.sh@15 -- # uname 00:03:49.533 11:21:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:49.533 11:21:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:49.533 11:21:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.533 11:21:50 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:49.533 11:21:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.533 11:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.792 ************************************ 00:03:49.792 START TEST env_dpdk_post_init 00:03:49.792 ************************************ 00:03:49.792 11:21:50 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.792 EAL: Detected CPU lcores: 112 00:03:49.792 EAL: Detected NUMA nodes: 2 00:03:49.792 EAL: Detected shared linkage of DPDK 00:03:49.792 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.792 EAL: Selected IOVA mode 'VA' 00:03:49.792 EAL: VFIO support initialized 00:03:49.792 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.792 EAL: Using IOMMU type 1 (Type 1) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:49.792 EAL: Ignore mapping IO port bar(1) 00:03:49.792 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:50.051 
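The env_dpdk_post_init run in progress here initializes DPDK with VFIO and IOVA mode 'VA', then walks the PCI bus probing each Intel ioat DMA channel (8086:2021) and finally the NVMe controller at 0000:86:00.0. For that to work the devices must already be bound to vfio-pci, which is normally done once per host with the SPDK setup helper; a hedged sketch follows (PCI_ALLOWED, called PCI_WHITELIST in older SPDK releases, limits the binding to the listed addresses):

  # Show which driver each NVMe/ioat device is currently bound to
  sudo ./scripts/setup.sh status

  # Bind only the NVMe controller seen in this log to vfio-pci
  sudo PCI_ALLOWED="0000:86:00.0" ./scripts/setup.sh

  # Hand the devices back to their kernel drivers afterwards
  sudo ./scripts/setup.sh reset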
EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:50.051 EAL: Ignore mapping IO port bar(1) 00:03:50.051 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:50.986 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:03:54.270 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:03:54.270 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:03:54.270 Starting DPDK initialization... 00:03:54.270 Starting SPDK post initialization... 00:03:54.270 SPDK NVMe probe 00:03:54.270 Attaching to 0000:86:00.0 00:03:54.270 Attached to 0000:86:00.0 00:03:54.270 Cleaning up... 00:03:54.270 00:03:54.270 real 0m4.503s 00:03:54.270 user 0m3.075s 00:03:54.270 sys 0m0.484s 00:03:54.270 11:21:54 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.270 11:21:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 ************************************ 00:03:54.270 END TEST env_dpdk_post_init 00:03:54.270 ************************************ 00:03:54.270 11:21:54 env -- env/env.sh@26 -- # uname 00:03:54.270 11:21:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:54.270 11:21:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:54.270 11:21:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.270 11:21:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.270 11:21:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 ************************************ 00:03:54.270 START TEST env_mem_callbacks 00:03:54.270 ************************************ 00:03:54.270 11:21:54 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:54.270 EAL: Detected CPU lcores: 112 00:03:54.270 EAL: Detected NUMA nodes: 2 00:03:54.270 EAL: Detected shared linkage of DPDK 00:03:54.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:54.270 EAL: Selected IOVA mode 'VA' 00:03:54.270 EAL: VFIO support initialized 00:03:54.270 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:54.270 00:03:54.270 00:03:54.270 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.270 http://cunit.sourceforge.net/ 00:03:54.270 00:03:54.270 00:03:54.270 Suite: memory 00:03:54.270 Test: test ... 
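The env_mem_callbacks output below interleaves the test's allocations with the notifications its registered memory callback receives: a 'register <vaddr> <len>' line appears whenever the heap pulls in new hugepage-backed memory, 'buf <addr> len <n> PASSED' is the buffer the malloc returned, and 'unregister' fires when a free lets the heap release pages again (a reading of the trace order, since the binary prints no labels beyond these). To rerun just this binary from a workspace checkout, using the path shown in the run_test line above and assuming hugepages are already configured:

  ./test/env/mem_callbacks/mem_callbacks

  # Keep only the callback lines to watch the heap grow and shrink
  ./test/env/mem_callbacks/mem_callbacks 2>&1 | grep -E '^(register|unregister) '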
00:03:54.270 register 0x200000200000 2097152 00:03:54.270 malloc 3145728 00:03:54.270 register 0x200000400000 4194304 00:03:54.270 buf 0x200000500000 len 3145728 PASSED 00:03:54.270 malloc 64 00:03:54.270 buf 0x2000004fff40 len 64 PASSED 00:03:54.270 malloc 4194304 00:03:54.270 register 0x200000800000 6291456 00:03:54.270 buf 0x200000a00000 len 4194304 PASSED 00:03:54.270 free 0x200000500000 3145728 00:03:54.270 free 0x2000004fff40 64 00:03:54.270 unregister 0x200000400000 4194304 PASSED 00:03:54.270 free 0x200000a00000 4194304 00:03:54.270 unregister 0x200000800000 6291456 PASSED 00:03:54.270 malloc 8388608 00:03:54.270 register 0x200000400000 10485760 00:03:54.270 buf 0x200000600000 len 8388608 PASSED 00:03:54.270 free 0x200000600000 8388608 00:03:54.270 unregister 0x200000400000 10485760 PASSED 00:03:54.270 passed 00:03:54.270 00:03:54.270 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.270 suites 1 1 n/a 0 0 00:03:54.270 tests 1 1 1 0 0 00:03:54.270 asserts 15 15 15 0 n/a 00:03:54.270 00:03:54.270 Elapsed time = 0.008 seconds 00:03:54.270 00:03:54.270 real 0m0.065s 00:03:54.270 user 0m0.019s 00:03:54.270 sys 0m0.046s 00:03:54.270 11:21:55 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.270 11:21:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 ************************************ 00:03:54.270 END TEST env_mem_callbacks 00:03:54.270 ************************************ 00:03:54.270 00:03:54.270 real 0m6.548s 00:03:54.270 user 0m4.246s 00:03:54.270 sys 0m1.364s 00:03:54.270 11:21:55 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.270 11:21:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 ************************************ 00:03:54.270 END TEST env 00:03:54.270 ************************************ 00:03:54.270 11:21:55 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:54.270 11:21:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.270 11:21:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.270 11:21:55 -- common/autotest_common.sh@10 -- # set +x 00:03:54.529 ************************************ 00:03:54.529 START TEST rpc 00:03:54.529 ************************************ 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:54.529 * Looking for test storage... 
00:03:54.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.529 11:21:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.529 11:21:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.529 11:21:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.529 11:21:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.529 11:21:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.529 11:21:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:54.529 11:21:55 rpc -- scripts/common.sh@345 -- # : 1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.529 11:21:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:54.529 11:21:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@353 -- # local d=1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.529 11:21:55 rpc -- scripts/common.sh@355 -- # echo 1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.529 11:21:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@353 -- # local d=2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.529 11:21:55 rpc -- scripts/common.sh@355 -- # echo 2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.529 11:21:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.529 11:21:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.529 11:21:55 rpc -- scripts/common.sh@368 -- # return 0 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.529 --rc genhtml_branch_coverage=1 00:03:54.529 --rc genhtml_function_coverage=1 00:03:54.529 --rc genhtml_legend=1 00:03:54.529 --rc geninfo_all_blocks=1 00:03:54.529 --rc geninfo_unexecuted_blocks=1 00:03:54.529 00:03:54.529 ' 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.529 --rc genhtml_branch_coverage=1 00:03:54.529 --rc genhtml_function_coverage=1 00:03:54.529 --rc genhtml_legend=1 00:03:54.529 --rc geninfo_all_blocks=1 00:03:54.529 --rc geninfo_unexecuted_blocks=1 00:03:54.529 00:03:54.529 ' 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.529 --rc genhtml_branch_coverage=1 00:03:54.529 --rc genhtml_function_coverage=1 
00:03:54.529 --rc genhtml_legend=1 00:03:54.529 --rc geninfo_all_blocks=1 00:03:54.529 --rc geninfo_unexecuted_blocks=1 00:03:54.529 00:03:54.529 ' 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.529 --rc genhtml_branch_coverage=1 00:03:54.529 --rc genhtml_function_coverage=1 00:03:54.529 --rc genhtml_legend=1 00:03:54.529 --rc geninfo_all_blocks=1 00:03:54.529 --rc geninfo_unexecuted_blocks=1 00:03:54.529 00:03:54.529 ' 00:03:54.529 11:21:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1018384 00:03:54.529 11:21:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.529 11:21:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1018384 00:03:54.529 11:21:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@833 -- # '[' -z 1018384 ']' 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.529 11:21:55 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:54.530 11:21:55 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.530 11:21:55 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:54.530 11:21:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.530 [2024-11-15 11:21:55.358424] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:03:54.530 [2024-11-15 11:21:55.358470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018384 ] 00:03:54.789 [2024-11-15 11:21:55.438358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.789 [2024-11-15 11:21:55.488961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:54.789 [2024-11-15 11:21:55.489000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1018384' to capture a snapshot of events at runtime. 00:03:54.789 [2024-11-15 11:21:55.489011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:54.789 [2024-11-15 11:21:55.489020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:54.789 [2024-11-15 11:21:55.489027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1018384 for offline analysis/debug. 
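At this point spdk_tgt (pid 1018384) is up on the default /var/tmp/spdk.sock socket with the bdev tracepoint group enabled and its trace buffer in /dev/shm/spdk_tgt_trace.pid1018384. The rpc_integrity, rpc_plugins, rpc_trace_cmd_test and rpc_daemon_integrity runs below drive it entirely through scripts/rpc.py; issued by hand, the same sequence looks like the sketch below (the long JSON dumps in the log are simply the bdev_get_bdevs responses being fed to jq):

  ./scripts/rpc.py bdev_malloc_create 8 512              # 8 MB malloc bdev with 512 B blocks, auto-named Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # 1
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py trace_get_info                        # tpoint group/shm info checked by rpc_trace_cmd_test
  ./build/bin/spdk_trace -s spdk_tgt -p 1018384          # decode the shm trace file named above

rpc_plugins additionally points PYTHONPATH at test/rpc_plugins so it can call rpc.py --plugin rpc_plugin create_malloc and delete_malloc, which is where the Malloc1 bdev in its output comes from.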
00:03:54.789 [2024-11-15 11:21:55.489739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.048 11:21:55 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:55.048 11:21:55 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:55.048 11:21:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.048 11:21:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.048 11:21:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:55.048 11:21:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:55.048 11:21:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.048 11:21:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.048 11:21:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.048 ************************************ 00:03:55.048 START TEST rpc_integrity 00:03:55.048 ************************************ 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.048 { 00:03:55.048 "name": "Malloc0", 00:03:55.048 "aliases": [ 00:03:55.048 "5f748647-863d-4009-af91-a634b31f0e6a" 00:03:55.048 ], 00:03:55.048 "product_name": "Malloc disk", 00:03:55.048 "block_size": 512, 00:03:55.048 "num_blocks": 16384, 00:03:55.048 "uuid": "5f748647-863d-4009-af91-a634b31f0e6a", 00:03:55.048 "assigned_rate_limits": { 00:03:55.048 "rw_ios_per_sec": 0, 00:03:55.048 "rw_mbytes_per_sec": 0, 00:03:55.048 "r_mbytes_per_sec": 0, 00:03:55.048 "w_mbytes_per_sec": 0 00:03:55.048 }, 
00:03:55.048 "claimed": false, 00:03:55.048 "zoned": false, 00:03:55.048 "supported_io_types": { 00:03:55.048 "read": true, 00:03:55.048 "write": true, 00:03:55.048 "unmap": true, 00:03:55.048 "flush": true, 00:03:55.048 "reset": true, 00:03:55.048 "nvme_admin": false, 00:03:55.048 "nvme_io": false, 00:03:55.048 "nvme_io_md": false, 00:03:55.048 "write_zeroes": true, 00:03:55.048 "zcopy": true, 00:03:55.048 "get_zone_info": false, 00:03:55.048 "zone_management": false, 00:03:55.048 "zone_append": false, 00:03:55.048 "compare": false, 00:03:55.048 "compare_and_write": false, 00:03:55.048 "abort": true, 00:03:55.048 "seek_hole": false, 00:03:55.048 "seek_data": false, 00:03:55.048 "copy": true, 00:03:55.048 "nvme_iov_md": false 00:03:55.048 }, 00:03:55.048 "memory_domains": [ 00:03:55.048 { 00:03:55.048 "dma_device_id": "system", 00:03:55.048 "dma_device_type": 1 00:03:55.048 }, 00:03:55.048 { 00:03:55.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.048 "dma_device_type": 2 00:03:55.048 } 00:03:55.048 ], 00:03:55.048 "driver_specific": {} 00:03:55.048 } 00:03:55.048 ]' 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.048 [2024-11-15 11:21:55.864425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:55.048 [2024-11-15 11:21:55.864468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.048 [2024-11-15 11:21:55.864485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12447b0 00:03:55.048 [2024-11-15 11:21:55.864494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.048 [2024-11-15 11:21:55.866049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.048 [2024-11-15 11:21:55.866076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:55.048 Passthru0 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.048 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.048 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:55.048 { 00:03:55.048 "name": "Malloc0", 00:03:55.048 "aliases": [ 00:03:55.048 "5f748647-863d-4009-af91-a634b31f0e6a" 00:03:55.048 ], 00:03:55.048 "product_name": "Malloc disk", 00:03:55.048 "block_size": 512, 00:03:55.048 "num_blocks": 16384, 00:03:55.048 "uuid": "5f748647-863d-4009-af91-a634b31f0e6a", 00:03:55.048 "assigned_rate_limits": { 00:03:55.048 "rw_ios_per_sec": 0, 00:03:55.048 "rw_mbytes_per_sec": 0, 00:03:55.048 "r_mbytes_per_sec": 0, 00:03:55.048 "w_mbytes_per_sec": 0 00:03:55.048 }, 00:03:55.048 "claimed": true, 00:03:55.048 "claim_type": "exclusive_write", 00:03:55.048 "zoned": false, 00:03:55.048 "supported_io_types": { 00:03:55.048 "read": true, 00:03:55.048 "write": true, 00:03:55.048 "unmap": true, 00:03:55.048 "flush": 
true, 00:03:55.048 "reset": true, 00:03:55.048 "nvme_admin": false, 00:03:55.048 "nvme_io": false, 00:03:55.048 "nvme_io_md": false, 00:03:55.048 "write_zeroes": true, 00:03:55.048 "zcopy": true, 00:03:55.048 "get_zone_info": false, 00:03:55.048 "zone_management": false, 00:03:55.048 "zone_append": false, 00:03:55.048 "compare": false, 00:03:55.048 "compare_and_write": false, 00:03:55.048 "abort": true, 00:03:55.048 "seek_hole": false, 00:03:55.048 "seek_data": false, 00:03:55.048 "copy": true, 00:03:55.048 "nvme_iov_md": false 00:03:55.048 }, 00:03:55.048 "memory_domains": [ 00:03:55.048 { 00:03:55.048 "dma_device_id": "system", 00:03:55.048 "dma_device_type": 1 00:03:55.048 }, 00:03:55.048 { 00:03:55.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.048 "dma_device_type": 2 00:03:55.048 } 00:03:55.048 ], 00:03:55.048 "driver_specific": {} 00:03:55.048 }, 00:03:55.048 { 00:03:55.048 "name": "Passthru0", 00:03:55.048 "aliases": [ 00:03:55.048 "9c6da5ab-4420-555b-9d50-ff818dd8d3fb" 00:03:55.048 ], 00:03:55.048 "product_name": "passthru", 00:03:55.048 "block_size": 512, 00:03:55.048 "num_blocks": 16384, 00:03:55.048 "uuid": "9c6da5ab-4420-555b-9d50-ff818dd8d3fb", 00:03:55.048 "assigned_rate_limits": { 00:03:55.048 "rw_ios_per_sec": 0, 00:03:55.048 "rw_mbytes_per_sec": 0, 00:03:55.048 "r_mbytes_per_sec": 0, 00:03:55.048 "w_mbytes_per_sec": 0 00:03:55.048 }, 00:03:55.048 "claimed": false, 00:03:55.048 "zoned": false, 00:03:55.048 "supported_io_types": { 00:03:55.048 "read": true, 00:03:55.048 "write": true, 00:03:55.048 "unmap": true, 00:03:55.048 "flush": true, 00:03:55.048 "reset": true, 00:03:55.048 "nvme_admin": false, 00:03:55.048 "nvme_io": false, 00:03:55.048 "nvme_io_md": false, 00:03:55.048 "write_zeroes": true, 00:03:55.048 "zcopy": true, 00:03:55.048 "get_zone_info": false, 00:03:55.048 "zone_management": false, 00:03:55.048 "zone_append": false, 00:03:55.048 "compare": false, 00:03:55.048 "compare_and_write": false, 00:03:55.048 "abort": true, 00:03:55.048 "seek_hole": false, 00:03:55.048 "seek_data": false, 00:03:55.048 "copy": true, 00:03:55.048 "nvme_iov_md": false 00:03:55.048 }, 00:03:55.048 "memory_domains": [ 00:03:55.048 { 00:03:55.048 "dma_device_id": "system", 00:03:55.048 "dma_device_type": 1 00:03:55.048 }, 00:03:55.048 { 00:03:55.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.048 "dma_device_type": 2 00:03:55.048 } 00:03:55.048 ], 00:03:55.048 "driver_specific": { 00:03:55.048 "passthru": { 00:03:55.048 "name": "Passthru0", 00:03:55.049 "base_bdev_name": "Malloc0" 00:03:55.049 } 00:03:55.049 } 00:03:55.049 } 00:03:55.049 ]' 00:03:55.049 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:55.307 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:55.307 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.307 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.307 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.307 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.308 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:55.308 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:55.308 11:21:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:55.308 00:03:55.308 real 0m0.255s 00:03:55.308 user 0m0.167s 00:03:55.308 sys 0m0.024s 00:03:55.308 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.308 11:21:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.308 ************************************ 00:03:55.308 END TEST rpc_integrity 00:03:55.308 ************************************ 00:03:55.308 11:21:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:55.308 11:21:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.308 11:21:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.308 11:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.308 ************************************ 00:03:55.308 START TEST rpc_plugins 00:03:55.308 ************************************ 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:55.308 { 00:03:55.308 "name": "Malloc1", 00:03:55.308 "aliases": [ 00:03:55.308 "e8688e2b-6015-429d-9b08-07b78186dd91" 00:03:55.308 ], 00:03:55.308 "product_name": "Malloc disk", 00:03:55.308 "block_size": 4096, 00:03:55.308 "num_blocks": 256, 00:03:55.308 "uuid": "e8688e2b-6015-429d-9b08-07b78186dd91", 00:03:55.308 "assigned_rate_limits": { 00:03:55.308 "rw_ios_per_sec": 0, 00:03:55.308 "rw_mbytes_per_sec": 0, 00:03:55.308 "r_mbytes_per_sec": 0, 00:03:55.308 "w_mbytes_per_sec": 0 00:03:55.308 }, 00:03:55.308 "claimed": false, 00:03:55.308 "zoned": false, 00:03:55.308 "supported_io_types": { 00:03:55.308 "read": true, 00:03:55.308 "write": true, 00:03:55.308 "unmap": true, 00:03:55.308 "flush": true, 00:03:55.308 "reset": true, 00:03:55.308 "nvme_admin": false, 00:03:55.308 "nvme_io": false, 00:03:55.308 "nvme_io_md": false, 00:03:55.308 "write_zeroes": true, 00:03:55.308 "zcopy": true, 00:03:55.308 "get_zone_info": false, 00:03:55.308 "zone_management": false, 00:03:55.308 "zone_append": false, 00:03:55.308 "compare": false, 00:03:55.308 "compare_and_write": false, 00:03:55.308 "abort": true, 00:03:55.308 "seek_hole": false, 00:03:55.308 "seek_data": false, 00:03:55.308 "copy": true, 00:03:55.308 "nvme_iov_md": false 
00:03:55.308 }, 00:03:55.308 "memory_domains": [ 00:03:55.308 { 00:03:55.308 "dma_device_id": "system", 00:03:55.308 "dma_device_type": 1 00:03:55.308 }, 00:03:55.308 { 00:03:55.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.308 "dma_device_type": 2 00:03:55.308 } 00:03:55.308 ], 00:03:55.308 "driver_specific": {} 00:03:55.308 } 00:03:55.308 ]' 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.308 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:55.308 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:55.566 11:21:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:55.566 00:03:55.566 real 0m0.142s 00:03:55.566 user 0m0.087s 00:03:55.566 sys 0m0.014s 00:03:55.566 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.566 11:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:55.566 ************************************ 00:03:55.566 END TEST rpc_plugins 00:03:55.566 ************************************ 00:03:55.566 11:21:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:55.566 11:21:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.566 11:21:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.566 11:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.566 ************************************ 00:03:55.566 START TEST rpc_trace_cmd_test 00:03:55.566 ************************************ 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.566 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:55.566 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1018384", 00:03:55.566 "tpoint_group_mask": "0x8", 00:03:55.566 "iscsi_conn": { 00:03:55.566 "mask": "0x2", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "scsi": { 00:03:55.566 "mask": "0x4", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "bdev": { 00:03:55.566 "mask": "0x8", 00:03:55.566 "tpoint_mask": "0xffffffffffffffff" 00:03:55.566 }, 00:03:55.566 "nvmf_rdma": { 00:03:55.566 "mask": "0x10", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "nvmf_tcp": { 00:03:55.566 "mask": "0x20", 00:03:55.566 
"tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "ftl": { 00:03:55.566 "mask": "0x40", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "blobfs": { 00:03:55.566 "mask": "0x80", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "dsa": { 00:03:55.566 "mask": "0x200", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "thread": { 00:03:55.566 "mask": "0x400", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "nvme_pcie": { 00:03:55.566 "mask": "0x800", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "iaa": { 00:03:55.566 "mask": "0x1000", 00:03:55.566 "tpoint_mask": "0x0" 00:03:55.566 }, 00:03:55.566 "nvme_tcp": { 00:03:55.566 "mask": "0x2000", 00:03:55.567 "tpoint_mask": "0x0" 00:03:55.567 }, 00:03:55.567 "bdev_nvme": { 00:03:55.567 "mask": "0x4000", 00:03:55.567 "tpoint_mask": "0x0" 00:03:55.567 }, 00:03:55.567 "sock": { 00:03:55.567 "mask": "0x8000", 00:03:55.567 "tpoint_mask": "0x0" 00:03:55.567 }, 00:03:55.567 "blob": { 00:03:55.567 "mask": "0x10000", 00:03:55.567 "tpoint_mask": "0x0" 00:03:55.567 }, 00:03:55.567 "bdev_raid": { 00:03:55.567 "mask": "0x20000", 00:03:55.567 "tpoint_mask": "0x0" 00:03:55.567 }, 00:03:55.567 "scheduler": { 00:03:55.567 "mask": "0x40000", 00:03:55.567 "tpoint_mask": "0x0" 00:03:55.567 } 00:03:55.567 }' 00:03:55.567 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:55.567 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:55.567 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:55.567 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:55.567 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:55.826 00:03:55.826 real 0m0.215s 00:03:55.826 user 0m0.187s 00:03:55.826 sys 0m0.021s 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.826 11:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:55.826 ************************************ 00:03:55.826 END TEST rpc_trace_cmd_test 00:03:55.826 ************************************ 00:03:55.826 11:21:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:55.826 11:21:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:55.826 11:21:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:55.826 11:21:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.826 11:21:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.826 11:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.826 ************************************ 00:03:55.826 START TEST rpc_daemon_integrity 00:03:55.826 ************************************ 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.826 11:21:56 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:55.826 { 00:03:55.826 "name": "Malloc2", 00:03:55.826 "aliases": [ 00:03:55.826 "05ea24d4-a68e-48c4-98bd-289358a4e1f7" 00:03:55.826 ], 00:03:55.826 "product_name": "Malloc disk", 00:03:55.826 "block_size": 512, 00:03:55.826 "num_blocks": 16384, 00:03:55.826 "uuid": "05ea24d4-a68e-48c4-98bd-289358a4e1f7", 00:03:55.826 "assigned_rate_limits": { 00:03:55.826 "rw_ios_per_sec": 0, 00:03:55.826 "rw_mbytes_per_sec": 0, 00:03:55.826 "r_mbytes_per_sec": 0, 00:03:55.826 "w_mbytes_per_sec": 0 00:03:55.826 }, 00:03:55.826 "claimed": false, 00:03:55.826 "zoned": false, 00:03:55.826 "supported_io_types": { 00:03:55.826 "read": true, 00:03:55.826 "write": true, 00:03:55.826 "unmap": true, 00:03:55.826 "flush": true, 00:03:55.826 "reset": true, 00:03:55.826 "nvme_admin": false, 00:03:55.826 "nvme_io": false, 00:03:55.826 "nvme_io_md": false, 00:03:55.826 "write_zeroes": true, 00:03:55.826 "zcopy": true, 00:03:55.826 "get_zone_info": false, 00:03:55.826 "zone_management": false, 00:03:55.826 "zone_append": false, 00:03:55.826 "compare": false, 00:03:55.826 "compare_and_write": false, 00:03:55.826 "abort": true, 00:03:55.826 "seek_hole": false, 00:03:55.826 "seek_data": false, 00:03:55.826 "copy": true, 00:03:55.826 "nvme_iov_md": false 00:03:55.826 }, 00:03:55.826 "memory_domains": [ 00:03:55.826 { 00:03:55.826 "dma_device_id": "system", 00:03:55.826 "dma_device_type": 1 00:03:55.826 }, 00:03:55.826 { 00:03:55.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:55.826 "dma_device_type": 2 00:03:55.826 } 00:03:55.826 ], 00:03:55.826 "driver_specific": {} 00:03:55.826 } 00:03:55.826 ]' 00:03:55.826 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.086 [2024-11-15 11:21:56.694809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:56.086 
[2024-11-15 11:21:56.694842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.086 [2024-11-15 11:21:56.694859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1372960 00:03:56.086 [2024-11-15 11:21:56.694868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.086 [2024-11-15 11:21:56.696336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.086 [2024-11-15 11:21:56.696360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:56.086 Passthru0 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:56.086 { 00:03:56.086 "name": "Malloc2", 00:03:56.086 "aliases": [ 00:03:56.086 "05ea24d4-a68e-48c4-98bd-289358a4e1f7" 00:03:56.086 ], 00:03:56.086 "product_name": "Malloc disk", 00:03:56.086 "block_size": 512, 00:03:56.086 "num_blocks": 16384, 00:03:56.086 "uuid": "05ea24d4-a68e-48c4-98bd-289358a4e1f7", 00:03:56.086 "assigned_rate_limits": { 00:03:56.086 "rw_ios_per_sec": 0, 00:03:56.086 "rw_mbytes_per_sec": 0, 00:03:56.086 "r_mbytes_per_sec": 0, 00:03:56.086 "w_mbytes_per_sec": 0 00:03:56.086 }, 00:03:56.086 "claimed": true, 00:03:56.086 "claim_type": "exclusive_write", 00:03:56.086 "zoned": false, 00:03:56.086 "supported_io_types": { 00:03:56.086 "read": true, 00:03:56.086 "write": true, 00:03:56.086 "unmap": true, 00:03:56.086 "flush": true, 00:03:56.086 "reset": true, 00:03:56.086 "nvme_admin": false, 00:03:56.086 "nvme_io": false, 00:03:56.086 "nvme_io_md": false, 00:03:56.086 "write_zeroes": true, 00:03:56.086 "zcopy": true, 00:03:56.086 "get_zone_info": false, 00:03:56.086 "zone_management": false, 00:03:56.086 "zone_append": false, 00:03:56.086 "compare": false, 00:03:56.086 "compare_and_write": false, 00:03:56.086 "abort": true, 00:03:56.086 "seek_hole": false, 00:03:56.086 "seek_data": false, 00:03:56.086 "copy": true, 00:03:56.086 "nvme_iov_md": false 00:03:56.086 }, 00:03:56.086 "memory_domains": [ 00:03:56.086 { 00:03:56.086 "dma_device_id": "system", 00:03:56.086 "dma_device_type": 1 00:03:56.086 }, 00:03:56.086 { 00:03:56.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.086 "dma_device_type": 2 00:03:56.086 } 00:03:56.086 ], 00:03:56.086 "driver_specific": {} 00:03:56.086 }, 00:03:56.086 { 00:03:56.086 "name": "Passthru0", 00:03:56.086 "aliases": [ 00:03:56.086 "3109fdfe-02bc-5422-a6c9-449165f9db1f" 00:03:56.086 ], 00:03:56.086 "product_name": "passthru", 00:03:56.086 "block_size": 512, 00:03:56.086 "num_blocks": 16384, 00:03:56.086 "uuid": "3109fdfe-02bc-5422-a6c9-449165f9db1f", 00:03:56.086 "assigned_rate_limits": { 00:03:56.086 "rw_ios_per_sec": 0, 00:03:56.086 "rw_mbytes_per_sec": 0, 00:03:56.086 "r_mbytes_per_sec": 0, 00:03:56.086 "w_mbytes_per_sec": 0 00:03:56.086 }, 00:03:56.086 "claimed": false, 00:03:56.086 "zoned": false, 00:03:56.086 "supported_io_types": { 00:03:56.086 "read": true, 00:03:56.086 "write": true, 00:03:56.086 "unmap": true, 00:03:56.086 "flush": true, 00:03:56.086 "reset": true, 
00:03:56.086 "nvme_admin": false, 00:03:56.086 "nvme_io": false, 00:03:56.086 "nvme_io_md": false, 00:03:56.086 "write_zeroes": true, 00:03:56.086 "zcopy": true, 00:03:56.086 "get_zone_info": false, 00:03:56.086 "zone_management": false, 00:03:56.086 "zone_append": false, 00:03:56.086 "compare": false, 00:03:56.086 "compare_and_write": false, 00:03:56.086 "abort": true, 00:03:56.086 "seek_hole": false, 00:03:56.086 "seek_data": false, 00:03:56.086 "copy": true, 00:03:56.086 "nvme_iov_md": false 00:03:56.086 }, 00:03:56.086 "memory_domains": [ 00:03:56.086 { 00:03:56.086 "dma_device_id": "system", 00:03:56.086 "dma_device_type": 1 00:03:56.086 }, 00:03:56.086 { 00:03:56.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.086 "dma_device_type": 2 00:03:56.086 } 00:03:56.086 ], 00:03:56.086 "driver_specific": { 00:03:56.086 "passthru": { 00:03:56.086 "name": "Passthru0", 00:03:56.086 "base_bdev_name": "Malloc2" 00:03:56.086 } 00:03:56.086 } 00:03:56.086 } 00:03:56.086 ]' 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.086 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.087 11:21:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.087 00:03:56.087 real 0m0.274s 00:03:56.087 user 0m0.185s 00:03:56.087 sys 0m0.024s 00:03:56.087 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.087 11:21:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.087 ************************************ 00:03:56.087 END TEST rpc_daemon_integrity 00:03:56.087 ************************************ 00:03:56.087 11:21:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:56.087 11:21:56 rpc -- rpc/rpc.sh@84 -- # killprocess 1018384 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@952 -- # '[' -z 1018384 ']' 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@956 -- # kill -0 1018384 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@957 -- # uname 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1018384 
00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1018384' 00:03:56.087 killing process with pid 1018384 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@971 -- # kill 1018384 00:03:56.087 11:21:56 rpc -- common/autotest_common.sh@976 -- # wait 1018384 00:03:56.655 00:03:56.655 real 0m2.099s 00:03:56.655 user 0m2.742s 00:03:56.655 sys 0m0.649s 00:03:56.655 11:21:57 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.655 11:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.655 ************************************ 00:03:56.655 END TEST rpc 00:03:56.655 ************************************ 00:03:56.655 11:21:57 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.655 11:21:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.655 11:21:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.655 11:21:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.655 ************************************ 00:03:56.655 START TEST skip_rpc 00:03:56.655 ************************************ 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.655 * Looking for test storage... 00:03:56.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.655 11:21:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.655 --rc genhtml_branch_coverage=1 00:03:56.655 --rc genhtml_function_coverage=1 00:03:56.655 --rc genhtml_legend=1 00:03:56.655 --rc geninfo_all_blocks=1 00:03:56.655 --rc geninfo_unexecuted_blocks=1 00:03:56.655 00:03:56.655 ' 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.655 --rc genhtml_branch_coverage=1 00:03:56.655 --rc genhtml_function_coverage=1 00:03:56.655 --rc genhtml_legend=1 00:03:56.655 --rc geninfo_all_blocks=1 00:03:56.655 --rc geninfo_unexecuted_blocks=1 00:03:56.655 00:03:56.655 ' 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.655 --rc genhtml_branch_coverage=1 00:03:56.655 --rc genhtml_function_coverage=1 00:03:56.655 --rc genhtml_legend=1 00:03:56.655 --rc geninfo_all_blocks=1 00:03:56.655 --rc geninfo_unexecuted_blocks=1 00:03:56.655 00:03:56.655 ' 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.655 --rc genhtml_branch_coverage=1 00:03:56.655 --rc genhtml_function_coverage=1 00:03:56.655 --rc genhtml_legend=1 00:03:56.655 --rc geninfo_all_blocks=1 00:03:56.655 --rc geninfo_unexecuted_blocks=1 00:03:56.655 00:03:56.655 ' 00:03:56.655 11:21:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.655 11:21:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.655 11:21:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.655 11:21:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.914 ************************************ 00:03:56.914 START TEST skip_rpc 00:03:56.914 ************************************ 00:03:56.914 11:21:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:56.914 
11:21:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1019083 00:03:56.914 11:21:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.914 11:21:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:56.914 11:21:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:56.914 [2024-11-15 11:21:57.577775] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:03:56.914 [2024-11-15 11:21:57.577830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019083 ] 00:03:56.914 [2024-11-15 11:21:57.675150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.914 [2024-11-15 11:21:57.723769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.181 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1019083 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 1019083 ']' 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 1019083 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1019083 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1019083' 00:04:02.182 killing process with pid 1019083 00:04:02.182 11:22:02 
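The skip_rpc case above is a negative test: spdk_tgt is started with --no-rpc-server, so the later rpc_cmd spdk_get_version must fail (the NOT wrapper asserts a non-zero exit). A hedged stand-alone equivalent, with repo-relative paths assumed instead of the absolute workspace paths used by the job:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                         # the test also sleeps before probing, see skip_rpc.sh@19
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered" >&2  # should not happen with --no-rpc-server
  fi
  kill $tgt_pid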
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 1019083 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 1019083 00:04:02.182 00:04:02.182 real 0m5.405s 00:04:02.182 user 0m5.136s 00:04:02.182 sys 0m0.314s 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.182 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.182 ************************************ 00:04:02.182 END TEST skip_rpc 00:04:02.182 ************************************ 00:04:02.182 11:22:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:02.182 11:22:02 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.182 11:22:02 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.182 11:22:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.182 ************************************ 00:04:02.182 START TEST skip_rpc_with_json 00:04:02.182 ************************************ 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1020129 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1020129 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 1020129 ']' 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:02.182 11:22:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.440 [2024-11-15 11:22:03.055414] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:02.440 [2024-11-15 11:22:03.055475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020129 ] 00:04:02.440 [2024-11-15 11:22:03.150381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.440 [2024-11-15 11:22:03.200557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.698 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.699 [2024-11-15 11:22:03.443865] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:02.699 request: 00:04:02.699 { 00:04:02.699 "trtype": "tcp", 00:04:02.699 "method": "nvmf_get_transports", 00:04:02.699 "req_id": 1 00:04:02.699 } 00:04:02.699 Got JSON-RPC error response 00:04:02.699 response: 00:04:02.699 { 00:04:02.699 "code": -19, 00:04:02.699 "message": "No such device" 00:04:02.699 } 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.699 [2024-11-15 11:22:03.456011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.699 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:02.958 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.958 11:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.958 { 00:04:02.958 "subsystems": [ 00:04:02.958 { 00:04:02.958 "subsystem": "fsdev", 00:04:02.958 "config": [ 00:04:02.958 { 00:04:02.958 "method": "fsdev_set_opts", 00:04:02.958 "params": { 00:04:02.958 "fsdev_io_pool_size": 65535, 00:04:02.958 "fsdev_io_cache_size": 256 00:04:02.958 } 00:04:02.958 } 00:04:02.958 ] 00:04:02.958 }, 00:04:02.958 { 00:04:02.958 "subsystem": "vfio_user_target", 00:04:02.958 "config": null 00:04:02.958 }, 00:04:02.958 { 00:04:02.958 "subsystem": "keyring", 00:04:02.958 "config": [] 00:04:02.958 }, 00:04:02.958 { 00:04:02.958 "subsystem": "iobuf", 00:04:02.958 "config": [ 00:04:02.958 { 00:04:02.958 "method": "iobuf_set_options", 00:04:02.958 "params": { 00:04:02.958 "small_pool_count": 8192, 00:04:02.958 "large_pool_count": 1024, 00:04:02.958 "small_bufsize": 8192, 00:04:02.958 "large_bufsize": 135168, 00:04:02.958 "enable_numa": false 00:04:02.958 } 00:04:02.958 } 
00:04:02.958 ] 00:04:02.958 }, 00:04:02.958 { 00:04:02.958 "subsystem": "sock", 00:04:02.958 "config": [ 00:04:02.958 { 00:04:02.958 "method": "sock_set_default_impl", 00:04:02.958 "params": { 00:04:02.958 "impl_name": "posix" 00:04:02.958 } 00:04:02.958 }, 00:04:02.958 { 00:04:02.958 "method": "sock_impl_set_options", 00:04:02.958 "params": { 00:04:02.958 "impl_name": "ssl", 00:04:02.958 "recv_buf_size": 4096, 00:04:02.958 "send_buf_size": 4096, 00:04:02.958 "enable_recv_pipe": true, 00:04:02.958 "enable_quickack": false, 00:04:02.958 "enable_placement_id": 0, 00:04:02.958 "enable_zerocopy_send_server": true, 00:04:02.958 "enable_zerocopy_send_client": false, 00:04:02.958 "zerocopy_threshold": 0, 00:04:02.958 "tls_version": 0, 00:04:02.958 "enable_ktls": false 00:04:02.958 } 00:04:02.958 }, 00:04:02.958 { 00:04:02.958 "method": "sock_impl_set_options", 00:04:02.958 "params": { 00:04:02.958 "impl_name": "posix", 00:04:02.958 "recv_buf_size": 2097152, 00:04:02.958 "send_buf_size": 2097152, 00:04:02.958 "enable_recv_pipe": true, 00:04:02.958 "enable_quickack": false, 00:04:02.958 "enable_placement_id": 0, 00:04:02.958 "enable_zerocopy_send_server": true, 00:04:02.958 "enable_zerocopy_send_client": false, 00:04:02.958 "zerocopy_threshold": 0, 00:04:02.959 "tls_version": 0, 00:04:02.959 "enable_ktls": false 00:04:02.959 } 00:04:02.959 } 00:04:02.959 ] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "vmd", 00:04:02.959 "config": [] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "accel", 00:04:02.959 "config": [ 00:04:02.959 { 00:04:02.959 "method": "accel_set_options", 00:04:02.959 "params": { 00:04:02.959 "small_cache_size": 128, 00:04:02.959 "large_cache_size": 16, 00:04:02.959 "task_count": 2048, 00:04:02.959 "sequence_count": 2048, 00:04:02.959 "buf_count": 2048 00:04:02.959 } 00:04:02.959 } 00:04:02.959 ] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "bdev", 00:04:02.959 "config": [ 00:04:02.959 { 00:04:02.959 "method": "bdev_set_options", 00:04:02.959 "params": { 00:04:02.959 "bdev_io_pool_size": 65535, 00:04:02.959 "bdev_io_cache_size": 256, 00:04:02.959 "bdev_auto_examine": true, 00:04:02.959 "iobuf_small_cache_size": 128, 00:04:02.959 "iobuf_large_cache_size": 16 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "bdev_raid_set_options", 00:04:02.959 "params": { 00:04:02.959 "process_window_size_kb": 1024, 00:04:02.959 "process_max_bandwidth_mb_sec": 0 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "bdev_iscsi_set_options", 00:04:02.959 "params": { 00:04:02.959 "timeout_sec": 30 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "bdev_nvme_set_options", 00:04:02.959 "params": { 00:04:02.959 "action_on_timeout": "none", 00:04:02.959 "timeout_us": 0, 00:04:02.959 "timeout_admin_us": 0, 00:04:02.959 "keep_alive_timeout_ms": 10000, 00:04:02.959 "arbitration_burst": 0, 00:04:02.959 "low_priority_weight": 0, 00:04:02.959 "medium_priority_weight": 0, 00:04:02.959 "high_priority_weight": 0, 00:04:02.959 "nvme_adminq_poll_period_us": 10000, 00:04:02.959 "nvme_ioq_poll_period_us": 0, 00:04:02.959 "io_queue_requests": 0, 00:04:02.959 "delay_cmd_submit": true, 00:04:02.959 "transport_retry_count": 4, 00:04:02.959 "bdev_retry_count": 3, 00:04:02.959 "transport_ack_timeout": 0, 00:04:02.959 "ctrlr_loss_timeout_sec": 0, 00:04:02.959 "reconnect_delay_sec": 0, 00:04:02.959 "fast_io_fail_timeout_sec": 0, 00:04:02.959 "disable_auto_failback": false, 00:04:02.959 "generate_uuids": false, 00:04:02.959 "transport_tos": 
0, 00:04:02.959 "nvme_error_stat": false, 00:04:02.959 "rdma_srq_size": 0, 00:04:02.959 "io_path_stat": false, 00:04:02.959 "allow_accel_sequence": false, 00:04:02.959 "rdma_max_cq_size": 0, 00:04:02.959 "rdma_cm_event_timeout_ms": 0, 00:04:02.959 "dhchap_digests": [ 00:04:02.959 "sha256", 00:04:02.959 "sha384", 00:04:02.959 "sha512" 00:04:02.959 ], 00:04:02.959 "dhchap_dhgroups": [ 00:04:02.959 "null", 00:04:02.959 "ffdhe2048", 00:04:02.959 "ffdhe3072", 00:04:02.959 "ffdhe4096", 00:04:02.959 "ffdhe6144", 00:04:02.959 "ffdhe8192" 00:04:02.959 ] 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "bdev_nvme_set_hotplug", 00:04:02.959 "params": { 00:04:02.959 "period_us": 100000, 00:04:02.959 "enable": false 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "bdev_wait_for_examine" 00:04:02.959 } 00:04:02.959 ] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "scsi", 00:04:02.959 "config": null 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "scheduler", 00:04:02.959 "config": [ 00:04:02.959 { 00:04:02.959 "method": "framework_set_scheduler", 00:04:02.959 "params": { 00:04:02.959 "name": "static" 00:04:02.959 } 00:04:02.959 } 00:04:02.959 ] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "vhost_scsi", 00:04:02.959 "config": [] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "vhost_blk", 00:04:02.959 "config": [] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "ublk", 00:04:02.959 "config": [] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "nbd", 00:04:02.959 "config": [] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "nvmf", 00:04:02.959 "config": [ 00:04:02.959 { 00:04:02.959 "method": "nvmf_set_config", 00:04:02.959 "params": { 00:04:02.959 "discovery_filter": "match_any", 00:04:02.959 "admin_cmd_passthru": { 00:04:02.959 "identify_ctrlr": false 00:04:02.959 }, 00:04:02.959 "dhchap_digests": [ 00:04:02.959 "sha256", 00:04:02.959 "sha384", 00:04:02.959 "sha512" 00:04:02.959 ], 00:04:02.959 "dhchap_dhgroups": [ 00:04:02.959 "null", 00:04:02.959 "ffdhe2048", 00:04:02.959 "ffdhe3072", 00:04:02.959 "ffdhe4096", 00:04:02.959 "ffdhe6144", 00:04:02.959 "ffdhe8192" 00:04:02.959 ] 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "nvmf_set_max_subsystems", 00:04:02.959 "params": { 00:04:02.959 "max_subsystems": 1024 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "nvmf_set_crdt", 00:04:02.959 "params": { 00:04:02.959 "crdt1": 0, 00:04:02.959 "crdt2": 0, 00:04:02.959 "crdt3": 0 00:04:02.959 } 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "method": "nvmf_create_transport", 00:04:02.959 "params": { 00:04:02.959 "trtype": "TCP", 00:04:02.959 "max_queue_depth": 128, 00:04:02.959 "max_io_qpairs_per_ctrlr": 127, 00:04:02.959 "in_capsule_data_size": 4096, 00:04:02.959 "max_io_size": 131072, 00:04:02.959 "io_unit_size": 131072, 00:04:02.959 "max_aq_depth": 128, 00:04:02.959 "num_shared_buffers": 511, 00:04:02.959 "buf_cache_size": 4294967295, 00:04:02.959 "dif_insert_or_strip": false, 00:04:02.959 "zcopy": false, 00:04:02.959 "c2h_success": true, 00:04:02.959 "sock_priority": 0, 00:04:02.959 "abort_timeout_sec": 1, 00:04:02.959 "ack_timeout": 0, 00:04:02.959 "data_wr_pool_size": 0 00:04:02.959 } 00:04:02.959 } 00:04:02.959 ] 00:04:02.959 }, 00:04:02.959 { 00:04:02.959 "subsystem": "iscsi", 00:04:02.959 "config": [ 00:04:02.959 { 00:04:02.959 "method": "iscsi_set_options", 00:04:02.959 "params": { 00:04:02.960 "node_base": "iqn.2016-06.io.spdk", 00:04:02.960 "max_sessions": 
128, 00:04:02.960 "max_connections_per_session": 2, 00:04:02.960 "max_queue_depth": 64, 00:04:02.960 "default_time2wait": 2, 00:04:02.960 "default_time2retain": 20, 00:04:02.960 "first_burst_length": 8192, 00:04:02.960 "immediate_data": true, 00:04:02.960 "allow_duplicated_isid": false, 00:04:02.960 "error_recovery_level": 0, 00:04:02.960 "nop_timeout": 60, 00:04:02.960 "nop_in_interval": 30, 00:04:02.960 "disable_chap": false, 00:04:02.960 "require_chap": false, 00:04:02.960 "mutual_chap": false, 00:04:02.960 "chap_group": 0, 00:04:02.960 "max_large_datain_per_connection": 64, 00:04:02.960 "max_r2t_per_connection": 4, 00:04:02.960 "pdu_pool_size": 36864, 00:04:02.960 "immediate_data_pool_size": 16384, 00:04:02.960 "data_out_pool_size": 2048 00:04:02.960 } 00:04:02.960 } 00:04:02.960 ] 00:04:02.960 } 00:04:02.960 ] 00:04:02.960 } 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1020129 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1020129 ']' 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1020129 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1020129 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1020129' 00:04:02.960 killing process with pid 1020129 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1020129 00:04:02.960 11:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1020129 00:04:03.218 11:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1020181 00:04:03.218 11:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:03.219 11:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1020181 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1020181 ']' 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1020181 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1020181 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
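The JSON blob above is what rpc_cmd save_config wrote to test/rpc/config.json; skip_rpc_with_json then relaunches spdk_tgt with --json pointing at that file and greps its log for 'TCP Transport Init' to confirm the nvmf TCP transport created by nvmf_create_transport was restored. A condensed sketch of that round trip (the /tmp paths are illustrative, the flags match the run above):

  scripts/rpc.py save_config > /tmp/config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' /tmp/log.txt && echo 'transport restored from saved config'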
# echo 'killing process with pid 1020181' 00:04:08.501 killing process with pid 1020181 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1020181 00:04:08.501 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1020181 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.761 00:04:08.761 real 0m6.432s 00:04:08.761 user 0m6.151s 00:04:08.761 sys 0m0.672s 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.761 ************************************ 00:04:08.761 END TEST skip_rpc_with_json 00:04:08.761 ************************************ 00:04:08.761 11:22:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:08.761 11:22:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.761 11:22:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.761 11:22:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.761 ************************************ 00:04:08.761 START TEST skip_rpc_with_delay 00:04:08.761 ************************************ 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:08.761 
[2024-11-15 11:22:09.561838] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:08.761 00:04:08.761 real 0m0.082s 00:04:08.761 user 0m0.054s 00:04:08.761 sys 0m0.027s 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.761 11:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:08.761 ************************************ 00:04:08.761 END TEST skip_rpc_with_delay 00:04:08.761 ************************************ 00:04:08.761 11:22:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:09.021 11:22:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:09.021 11:22:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:09.021 11:22:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.021 11:22:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.021 11:22:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.021 ************************************ 00:04:09.021 START TEST exit_on_failed_rpc_init 00:04:09.021 ************************************ 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1021282 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1021282 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 1021282 ']' 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:09.021 11:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:09.021 [2024-11-15 11:22:09.708493] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:09.021 [2024-11-15 11:22:09.708546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021282 ] 00:04:09.021 [2024-11-15 11:22:09.801678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.021 [2024-11-15 11:22:09.851701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:09.280 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:09.280 [2024-11-15 11:22:10.127645] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:09.280 [2024-11-15 11:22:10.127691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021457 ] 00:04:09.539 [2024-11-15 11:22:10.181116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.539 [2024-11-15 11:22:10.219811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.539 [2024-11-15 11:22:10.219868] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:09.539 [2024-11-15 11:22:10.219880] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:09.539 [2024-11-15 11:22:10.219887] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:09.539 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:09.539 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1021282 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 1021282 ']' 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 1021282 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1021282 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1021282' 00:04:09.540 killing process with pid 1021282 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 1021282 00:04:09.540 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 1021282 00:04:09.799 00:04:09.799 real 0m1.003s 00:04:09.799 user 0m1.096s 00:04:09.799 sys 0m0.399s 00:04:10.057 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.057 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.057 ************************************ 00:04:10.057 END TEST exit_on_failed_rpc_init 00:04:10.057 ************************************ 00:04:10.057 11:22:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.057 00:04:10.057 real 0m13.393s 00:04:10.057 user 0m12.650s 00:04:10.057 sys 0m1.701s 00:04:10.057 11:22:10 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.057 11:22:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.057 ************************************ 00:04:10.057 END TEST skip_rpc 00:04:10.057 ************************************ 00:04:10.057 11:22:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.057 11:22:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.057 11:22:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.057 11:22:10 -- 
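exit_on_failed_rpc_init deliberately starts a second spdk_tgt while the first still owns /var/tmp/spdk.sock, and expects the rpc.c "socket path ... in use" failure logged above. Outside the test, the usual way to run two targets side by side is to give the second one its own RPC socket with -r; the alternate socket path below is an illustrative assumption, not something taken from this run:

  ./build/bin/spdk_tgt -m 0x1 &                            # owns the default /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &     # second instance, separate socket
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # address the second instance explicitly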
common/autotest_common.sh@10 -- # set +x 00:04:10.057 ************************************ 00:04:10.057 START TEST rpc_client 00:04:10.057 ************************************ 00:04:10.057 11:22:10 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:10.057 * Looking for test storage... 00:04:10.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:10.057 11:22:10 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.057 11:22:10 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.057 11:22:10 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.316 11:22:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.316 --rc genhtml_branch_coverage=1 00:04:10.316 --rc genhtml_function_coverage=1 00:04:10.316 --rc genhtml_legend=1 00:04:10.316 --rc geninfo_all_blocks=1 00:04:10.316 --rc geninfo_unexecuted_blocks=1 00:04:10.316 00:04:10.316 ' 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.316 --rc genhtml_branch_coverage=1 00:04:10.316 --rc genhtml_function_coverage=1 00:04:10.316 --rc genhtml_legend=1 00:04:10.316 --rc geninfo_all_blocks=1 00:04:10.316 --rc geninfo_unexecuted_blocks=1 00:04:10.316 00:04:10.316 ' 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.316 --rc genhtml_branch_coverage=1 00:04:10.316 --rc genhtml_function_coverage=1 00:04:10.316 --rc genhtml_legend=1 00:04:10.316 --rc geninfo_all_blocks=1 00:04:10.316 --rc geninfo_unexecuted_blocks=1 00:04:10.316 00:04:10.316 ' 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.316 --rc genhtml_branch_coverage=1 00:04:10.316 --rc genhtml_function_coverage=1 00:04:10.316 --rc genhtml_legend=1 00:04:10.316 --rc geninfo_all_blocks=1 00:04:10.316 --rc geninfo_unexecuted_blocks=1 00:04:10.316 00:04:10.316 ' 00:04:10.316 11:22:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:10.316 OK 00:04:10.316 11:22:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:10.316 00:04:10.316 real 0m0.202s 00:04:10.316 user 0m0.129s 00:04:10.316 sys 0m0.084s 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.316 11:22:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:10.316 ************************************ 00:04:10.316 END TEST rpc_client 00:04:10.316 ************************************ 00:04:10.316 11:22:10 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:10.316 11:22:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.316 11:22:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.316 11:22:10 -- common/autotest_common.sh@10 -- # set +x 00:04:10.316 ************************************ 00:04:10.316 START TEST json_config 00:04:10.316 ************************************ 00:04:10.316 11:22:11 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:10.316 11:22:11 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.316 11:22:11 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.316 11:22:11 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.316 11:22:11 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.316 11:22:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.316 11:22:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.316 11:22:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.317 11:22:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.317 11:22:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.317 11:22:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.317 11:22:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.317 11:22:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.317 11:22:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.317 11:22:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:10.317 11:22:11 json_config -- scripts/common.sh@345 -- # : 1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.317 11:22:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.317 11:22:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@353 -- # local d=1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.317 11:22:11 json_config -- scripts/common.sh@355 -- # echo 1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.317 11:22:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:10.576 11:22:11 json_config -- scripts/common.sh@353 -- # local d=2 00:04:10.576 11:22:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.576 11:22:11 json_config -- scripts/common.sh@355 -- # echo 2 00:04:10.576 11:22:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.576 11:22:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.576 11:22:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.576 11:22:11 json_config -- scripts/common.sh@368 -- # return 0 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.576 --rc genhtml_branch_coverage=1 00:04:10.576 --rc genhtml_function_coverage=1 00:04:10.576 --rc genhtml_legend=1 00:04:10.576 --rc geninfo_all_blocks=1 00:04:10.576 --rc geninfo_unexecuted_blocks=1 00:04:10.576 00:04:10.576 ' 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.576 --rc genhtml_branch_coverage=1 00:04:10.576 --rc genhtml_function_coverage=1 00:04:10.576 --rc genhtml_legend=1 00:04:10.576 --rc geninfo_all_blocks=1 00:04:10.576 --rc geninfo_unexecuted_blocks=1 00:04:10.576 00:04:10.576 ' 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.576 --rc genhtml_branch_coverage=1 00:04:10.576 --rc genhtml_function_coverage=1 00:04:10.576 --rc genhtml_legend=1 00:04:10.576 --rc geninfo_all_blocks=1 00:04:10.576 --rc geninfo_unexecuted_blocks=1 00:04:10.576 00:04:10.576 ' 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.576 --rc genhtml_branch_coverage=1 00:04:10.576 --rc genhtml_function_coverage=1 00:04:10.576 --rc genhtml_legend=1 00:04:10.576 --rc geninfo_all_blocks=1 00:04:10.576 --rc geninfo_unexecuted_blocks=1 00:04:10.576 00:04:10.576 ' 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:10.576 11:22:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.576 11:22:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:10.576 11:22:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.576 11:22:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.576 11:22:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.576 11:22:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.576 11:22:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.576 11:22:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.576 11:22:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:10.576 11:22:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@51 -- # : 0 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:10.576 11:22:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:10.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:10.576 11:22:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:10.576 INFO: JSON configuration test init 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.576 11:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.576 11:22:11 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:10.576 11:22:11 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:10.576 11:22:11 json_config -- json_config/common.sh@10 -- # shift 00:04:10.576 11:22:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.576 11:22:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.577 11:22:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.577 11:22:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.577 11:22:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.577 11:22:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1021681 00:04:10.577 11:22:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.577 Waiting for target to run... 00:04:10.577 11:22:11 json_config -- json_config/common.sh@25 -- # waitforlisten 1021681 /var/tmp/spdk_tgt.sock 00:04:10.577 11:22:11 json_config -- common/autotest_common.sh@833 -- # '[' -z 1021681 ']' 00:04:10.577 11:22:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:10.577 11:22:11 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.577 11:22:11 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:10.577 11:22:11 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.577 11:22:11 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:10.577 11:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.577 [2024-11-15 11:22:11.284864] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:10.577 [2024-11-15 11:22:11.284930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021681 ] 00:04:11.143 [2024-11-15 11:22:11.746623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.143 [2024-11-15 11:22:11.814848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.711 11:22:12 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:11.711 11:22:12 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:11.711 11:22:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:11.711 00:04:11.711 11:22:12 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:11.711 11:22:12 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:11.711 11:22:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.711 11:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.711 11:22:12 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:11.711 11:22:12 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:11.711 11:22:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:11.711 11:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.711 11:22:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:11.711 11:22:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:11.711 11:22:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:14.998 11:22:15 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:14.998 11:22:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:14.998 11:22:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.998 11:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.998 11:22:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:14.998 11:22:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:14.998 11:22:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:14.999 11:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:14.999 11:22:15 json_config -- 
json_config/json_config.sh@54 -- # sort 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:14.999 11:22:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.999 11:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:14.999 11:22:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.999 11:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:14.999 11:22:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.999 11:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.292 MallocForNvmf0 00:04:15.613 11:22:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.614 11:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.614 MallocForNvmf1 00:04:15.614 11:22:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.614 11:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.938 [2024-11-15 11:22:16.645074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.939 11:22:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.939 11:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.248 11:22:16 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.248 11:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.507 11:22:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.507 11:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.766 11:22:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.766 11:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.025 [2024-11-15 11:22:17.620340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.025 11:22:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:17.025 11:22:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.025 11:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.025 11:22:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:17.025 11:22:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.025 11:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.025 11:22:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:17.026 11:22:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.026 11:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.285 MallocBdevForConfigChangeCheck 00:04:17.285 11:22:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:17.285 11:22:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.285 11:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.285 11:22:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:17.285 11:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.543 11:22:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:17.543 INFO: shutting down applications... 
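The target configuration traced above is driven entirely through rpc.py against /var/tmp/spdk_tgt.sock: two malloc bdevs, a TCP transport, one subsystem, two namespaces and a listener, followed by save_config. A minimal standalone sketch of that same sequence follows; the $RPC shorthand and the shortened paths are illustrative only, and it assumes a spdk_tgt that is already running and initialized on that socket.

  # Hypothetical replay of the RPC sequence shown in the trace above.
  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json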
00:04:17.543 11:22:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:17.543 11:22:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:17.543 11:22:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:17.802 11:22:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.181 Calling clear_iscsi_subsystem 00:04:19.181 Calling clear_nvmf_subsystem 00:04:19.181 Calling clear_nbd_subsystem 00:04:19.181 Calling clear_ublk_subsystem 00:04:19.181 Calling clear_vhost_blk_subsystem 00:04:19.181 Calling clear_vhost_scsi_subsystem 00:04:19.181 Calling clear_bdev_subsystem 00:04:19.440 11:22:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:19.440 11:22:20 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:19.440 11:22:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:19.440 11:22:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.440 11:22:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.440 11:22:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.699 11:22:20 json_config -- json_config/json_config.sh@352 -- # break 00:04:19.699 11:22:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:19.699 11:22:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:19.699 11:22:20 json_config -- json_config/common.sh@31 -- # local app=target 00:04:19.699 11:22:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.699 11:22:20 json_config -- json_config/common.sh@35 -- # [[ -n 1021681 ]] 00:04:19.699 11:22:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1021681 00:04:19.699 11:22:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.699 11:22:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.699 11:22:20 json_config -- json_config/common.sh@41 -- # kill -0 1021681 00:04:19.699 11:22:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.267 11:22:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.267 11:22:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.267 11:22:20 json_config -- json_config/common.sh@41 -- # kill -0 1021681 00:04:20.267 11:22:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.267 11:22:20 json_config -- json_config/common.sh@43 -- # break 00:04:20.267 11:22:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.267 11:22:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.267 SPDK target shutdown done 00:04:20.267 11:22:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:20.267 INFO: relaunching applications... 
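The shutdown sequence above is a simple poll-until-exit loop (json_config/common.sh@38-45 in the trace): send SIGINT, then check the PID every half second for at most 30 iterations. A rough standalone equivalent, with the PID hard-coded purely for illustration:

  # Sketch of the graceful-shutdown loop seen in the trace (not the exact script text).
  pid=1021681                               # target PID from this run; illustrative only
  kill -SIGINT "$pid"                       # ask spdk_tgt to shut down cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown done
      sleep 0.5                             # up to ~15 s before giving up
  done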
00:04:20.267 11:22:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.267 11:22:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.267 11:22:20 json_config -- json_config/common.sh@10 -- # shift 00:04:20.267 11:22:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.267 11:22:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.267 11:22:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.267 11:22:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.267 11:22:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.267 11:22:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1023649 00:04:20.267 11:22:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.267 Waiting for target to run... 00:04:20.267 11:22:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.267 11:22:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1023649 /var/tmp/spdk_tgt.sock 00:04:20.267 11:22:20 json_config -- common/autotest_common.sh@833 -- # '[' -z 1023649 ']' 00:04:20.267 11:22:20 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.267 11:22:20 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:20.267 11:22:20 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.267 11:22:20 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:20.267 11:22:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.267 [2024-11-15 11:22:21.020974] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:20.267 [2024-11-15 11:22:21.021047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023649 ] 00:04:20.835 [2024-11-15 11:22:21.481523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.835 [2024-11-15 11:22:21.545054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.123 [2024-11-15 11:22:24.613843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:24.123 [2024-11-15 11:22:24.646262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.123 11:22:24 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.123 11:22:24 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:24.123 11:22:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:24.123 00:04:24.123 11:22:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:24.123 11:22:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:24.123 INFO: Checking if target configuration is the same... 
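The relaunch above replays the configuration captured with save_config before the shutdown. Reduced to its two essential commands (paths shortened, options as in the trace), the round-trip looks like:

  # Capture the live configuration, then restart the target directly from it.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json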
00:04:24.123 11:22:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.123 11:22:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:24.123 11:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.123 + '[' 2 -ne 2 ']' 00:04:24.123 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:24.123 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:24.123 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.123 +++ basename /dev/fd/62 00:04:24.123 ++ mktemp /tmp/62.XXX 00:04:24.124 + tmp_file_1=/tmp/62.eg0 00:04:24.124 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.124 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:24.124 + tmp_file_2=/tmp/spdk_tgt_config.json.qjS 00:04:24.124 + ret=0 00:04:24.124 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.382 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:24.382 + diff -u /tmp/62.eg0 /tmp/spdk_tgt_config.json.qjS 00:04:24.382 + echo 'INFO: JSON config files are the same' 00:04:24.382 INFO: JSON config files are the same 00:04:24.382 + rm /tmp/62.eg0 /tmp/spdk_tgt_config.json.qjS 00:04:24.382 + exit 0 00:04:24.382 11:22:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:24.382 11:22:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:24.382 INFO: changing configuration and checking if this can be detected... 00:04:24.382 11:22:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:24.382 11:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:24.640 11:22:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.640 11:22:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:24.640 11:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.640 + '[' 2 -ne 2 ']' 00:04:24.640 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:24.640 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
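json_diff.sh, traced above, normalizes both configurations with config_filter.py -method sort before diffing, so key ordering alone cannot cause a false mismatch. A condensed sketch of the same check, assuming config_filter.py filters stdin to stdout (the redirections themselves are not visible in the xtrace):

  # Compare the running target's config with the saved JSON after sorting both.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'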
00:04:24.640 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.640 +++ basename /dev/fd/62 00:04:24.640 ++ mktemp /tmp/62.XXX 00:04:24.640 + tmp_file_1=/tmp/62.qZu 00:04:24.640 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.640 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:24.640 + tmp_file_2=/tmp/spdk_tgt_config.json.zVJ 00:04:24.640 + ret=0 00:04:24.640 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.205 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.205 + diff -u /tmp/62.qZu /tmp/spdk_tgt_config.json.zVJ 00:04:25.205 + ret=1 00:04:25.205 + echo '=== Start of file: /tmp/62.qZu ===' 00:04:25.205 + cat /tmp/62.qZu 00:04:25.205 + echo '=== End of file: /tmp/62.qZu ===' 00:04:25.205 + echo '' 00:04:25.205 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zVJ ===' 00:04:25.205 + cat /tmp/spdk_tgt_config.json.zVJ 00:04:25.205 + echo '=== End of file: /tmp/spdk_tgt_config.json.zVJ ===' 00:04:25.205 + echo '' 00:04:25.205 + rm /tmp/62.qZu /tmp/spdk_tgt_config.json.zVJ 00:04:25.206 + exit 1 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:25.206 INFO: configuration change detected. 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 1023649 ]] 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.206 11:22:25 json_config -- json_config/json_config.sh@330 -- # killprocess 1023649 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@952 -- # '[' -z 1023649 ']' 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@956 -- # kill -0 1023649 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@957 -- # uname 00:04:25.206 11:22:25 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:25.206 11:22:25 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1023649 00:04:25.206 11:22:26 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:25.206 11:22:26 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:25.206 11:22:26 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1023649' 00:04:25.206 killing process with pid 1023649 00:04:25.206 11:22:26 json_config -- common/autotest_common.sh@971 -- # kill 1023649 00:04:25.206 11:22:26 json_config -- common/autotest_common.sh@976 -- # wait 1023649 00:04:27.107 11:22:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.107 11:22:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:27.107 11:22:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.107 11:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.107 11:22:27 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:27.107 11:22:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:27.107 INFO: Success 00:04:27.107 00:04:27.107 real 0m16.641s 00:04:27.107 user 0m18.108s 00:04:27.107 sys 0m2.903s 00:04:27.107 11:22:27 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.107 11:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.107 ************************************ 00:04:27.107 END TEST json_config 00:04:27.107 ************************************ 00:04:27.107 11:22:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:27.107 11:22:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.107 11:22:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.107 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:04:27.108 ************************************ 00:04:27.108 START TEST json_config_extra_key 00:04:27.108 ************************************ 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.108 11:22:27 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.108 --rc genhtml_branch_coverage=1 00:04:27.108 --rc genhtml_function_coverage=1 00:04:27.108 --rc genhtml_legend=1 00:04:27.108 --rc geninfo_all_blocks=1 00:04:27.108 --rc geninfo_unexecuted_blocks=1 00:04:27.108 00:04:27.108 ' 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.108 --rc genhtml_branch_coverage=1 00:04:27.108 --rc genhtml_function_coverage=1 00:04:27.108 --rc genhtml_legend=1 00:04:27.108 --rc geninfo_all_blocks=1 00:04:27.108 --rc geninfo_unexecuted_blocks=1 00:04:27.108 00:04:27.108 ' 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.108 --rc genhtml_branch_coverage=1 00:04:27.108 --rc genhtml_function_coverage=1 00:04:27.108 --rc genhtml_legend=1 00:04:27.108 --rc geninfo_all_blocks=1 00:04:27.108 --rc geninfo_unexecuted_blocks=1 00:04:27.108 00:04:27.108 ' 00:04:27.108 11:22:27 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.108 --rc genhtml_branch_coverage=1 00:04:27.108 --rc genhtml_function_coverage=1 00:04:27.108 --rc genhtml_legend=1 00:04:27.108 --rc geninfo_all_blocks=1 00:04:27.108 --rc geninfo_unexecuted_blocks=1 00:04:27.108 00:04:27.108 ' 00:04:27.108 11:22:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.108 11:22:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.108 11:22:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.108 11:22:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.108 11:22:27 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.108 11:22:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:27.108 11:22:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.108 11:22:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.108 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:27.109 INFO: launching applications... 
00:04:27.109 11:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1025086 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.109 Waiting for target to run... 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1025086 /var/tmp/spdk_tgt.sock 00:04:27.109 11:22:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:27.109 11:22:27 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 1025086 ']' 00:04:27.109 11:22:27 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.109 11:22:27 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.109 11:22:27 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.109 11:22:27 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.109 11:22:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.368 [2024-11-15 11:22:27.972359] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:27.368 [2024-11-15 11:22:27.972406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025086 ] 00:04:27.627 [2024-11-15 11:22:28.401961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.627 [2024-11-15 11:22:28.460633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.195 11:22:28 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:28.195 11:22:28 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.195 00:04:28.195 11:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:28.195 INFO: shutting down applications... 
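Each launch above blocks in waitforlisten until the new spdk_tgt answers on its UNIX domain socket. The real helper lives in test/common/autotest_common.sh and is not reproduced in this trace; the sketch below only approximates its behaviour, reusing the max_retries=100 value from the trace and an rpc_get_methods probe (an RPC that does exist, as the spdkcli_tcp section later shows).

  # Approximate stand-in for waitforlisten: poll the RPC socket until it answers.
  wait_for_rpc() {
      local pid=$1 sock=$2 retries=100
      while ((retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1                        # target died
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.5                                                     # assumed interval
      done
      return 1
  }
  # e.g. wait_for_rpc "$app_pid" /var/tmp/spdk_tgt.sock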
00:04:28.195 11:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1025086 ]] 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1025086 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1025086 00:04:28.195 11:22:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1025086 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:28.763 11:22:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:28.763 SPDK target shutdown done 00:04:28.763 11:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:28.763 Success 00:04:28.763 00:04:28.763 real 0m1.653s 00:04:28.763 user 0m1.368s 00:04:28.763 sys 0m0.561s 00:04:28.763 11:22:29 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.763 11:22:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 ************************************ 00:04:28.763 END TEST json_config_extra_key 00:04:28.763 ************************************ 00:04:28.763 11:22:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:28.763 11:22:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.763 11:22:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.763 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 ************************************ 00:04:28.763 START TEST alias_rpc 00:04:28.763 ************************************ 00:04:28.763 11:22:29 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:28.763 * Looking for test storage... 
00:04:28.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:28.763 11:22:29 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.763 11:22:29 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.763 11:22:29 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.763 11:22:29 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.763 11:22:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.023 11:22:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.023 --rc genhtml_branch_coverage=1 00:04:29.023 --rc genhtml_function_coverage=1 00:04:29.023 --rc genhtml_legend=1 00:04:29.023 --rc geninfo_all_blocks=1 00:04:29.023 --rc geninfo_unexecuted_blocks=1 00:04:29.023 00:04:29.023 ' 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.023 --rc genhtml_branch_coverage=1 00:04:29.023 --rc genhtml_function_coverage=1 00:04:29.023 --rc genhtml_legend=1 00:04:29.023 --rc geninfo_all_blocks=1 00:04:29.023 --rc geninfo_unexecuted_blocks=1 00:04:29.023 00:04:29.023 ' 00:04:29.023 11:22:29 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.023 --rc genhtml_branch_coverage=1 00:04:29.023 --rc genhtml_function_coverage=1 00:04:29.023 --rc genhtml_legend=1 00:04:29.023 --rc geninfo_all_blocks=1 00:04:29.023 --rc geninfo_unexecuted_blocks=1 00:04:29.023 00:04:29.023 ' 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.023 --rc genhtml_branch_coverage=1 00:04:29.023 --rc genhtml_function_coverage=1 00:04:29.023 --rc genhtml_legend=1 00:04:29.023 --rc geninfo_all_blocks=1 00:04:29.023 --rc geninfo_unexecuted_blocks=1 00:04:29.023 00:04:29.023 ' 00:04:29.023 11:22:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:29.023 11:22:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1025490 00:04:29.023 11:22:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1025490 00:04:29.023 11:22:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 1025490 ']' 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.023 11:22:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.023 [2024-11-15 11:22:29.685568] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:29.023 [2024-11-15 11:22:29.685633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025490 ] 00:04:29.023 [2024-11-15 11:22:29.772308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.023 [2024-11-15 11:22:29.822570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.282 11:22:30 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.282 11:22:30 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:29.282 11:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:29.541 11:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1025490 00:04:29.541 11:22:30 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 1025490 ']' 00:04:29.541 11:22:30 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 1025490 00:04:29.541 11:22:30 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:29.541 11:22:30 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:29.541 11:22:30 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1025490 00:04:29.800 11:22:30 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:29.800 11:22:30 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:29.800 11:22:30 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1025490' 00:04:29.800 killing process with pid 1025490 00:04:29.800 11:22:30 alias_rpc -- common/autotest_common.sh@971 -- # kill 1025490 00:04:29.800 11:22:30 alias_rpc -- common/autotest_common.sh@976 -- # wait 1025490 00:04:30.059 00:04:30.059 real 0m1.315s 00:04:30.059 user 0m1.484s 00:04:30.059 sys 0m0.439s 00:04:30.059 11:22:30 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.059 11:22:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.059 ************************************ 00:04:30.059 END TEST alias_rpc 00:04:30.059 ************************************ 00:04:30.059 11:22:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:30.059 11:22:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:30.059 11:22:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.059 11:22:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.059 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:04:30.059 ************************************ 00:04:30.059 START TEST spdkcli_tcp 00:04:30.059 ************************************ 00:04:30.059 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:30.059 * Looking for test storage... 
00:04:30.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:30.059 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:30.059 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:30.059 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.319 11:22:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.319 --rc genhtml_branch_coverage=1 00:04:30.319 --rc genhtml_function_coverage=1 00:04:30.319 --rc genhtml_legend=1 00:04:30.319 --rc geninfo_all_blocks=1 00:04:30.319 --rc geninfo_unexecuted_blocks=1 00:04:30.319 00:04:30.319 ' 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.319 --rc genhtml_branch_coverage=1 00:04:30.319 --rc genhtml_function_coverage=1 00:04:30.319 --rc genhtml_legend=1 00:04:30.319 --rc geninfo_all_blocks=1 00:04:30.319 --rc 
geninfo_unexecuted_blocks=1 00:04:30.319 00:04:30.319 ' 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.319 --rc genhtml_branch_coverage=1 00:04:30.319 --rc genhtml_function_coverage=1 00:04:30.319 --rc genhtml_legend=1 00:04:30.319 --rc geninfo_all_blocks=1 00:04:30.319 --rc geninfo_unexecuted_blocks=1 00:04:30.319 00:04:30.319 ' 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:30.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.319 --rc genhtml_branch_coverage=1 00:04:30.319 --rc genhtml_function_coverage=1 00:04:30.319 --rc genhtml_legend=1 00:04:30.319 --rc geninfo_all_blocks=1 00:04:30.319 --rc geninfo_unexecuted_blocks=1 00:04:30.319 00:04:30.319 ' 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:30.319 11:22:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.319 11:22:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.319 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1025777 00:04:30.319 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1025777 00:04:30.319 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:30.319 11:22:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 1025777 ']' 00:04:30.319 11:22:31 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.319 11:22:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.319 11:22:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.319 11:22:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.319 11:22:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.319 [2024-11-15 11:22:31.060827] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:30.319 [2024-11-15 11:22:31.060894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025777 ] 00:04:30.319 [2024-11-15 11:22:31.146576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.579 [2024-11-15 11:22:31.199905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.579 [2024-11-15 11:22:31.199912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.838 11:22:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.838 11:22:31 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:30.838 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1025986 00:04:30.838 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.838 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:31.098 [ 00:04:31.098 "bdev_malloc_delete", 00:04:31.098 "bdev_malloc_create", 00:04:31.098 "bdev_null_resize", 00:04:31.098 "bdev_null_delete", 00:04:31.098 "bdev_null_create", 00:04:31.098 "bdev_nvme_cuse_unregister", 00:04:31.098 "bdev_nvme_cuse_register", 00:04:31.098 "bdev_opal_new_user", 00:04:31.098 "bdev_opal_set_lock_state", 00:04:31.098 "bdev_opal_delete", 00:04:31.098 "bdev_opal_get_info", 00:04:31.098 "bdev_opal_create", 00:04:31.098 "bdev_nvme_opal_revert", 00:04:31.098 "bdev_nvme_opal_init", 00:04:31.098 "bdev_nvme_send_cmd", 00:04:31.098 "bdev_nvme_set_keys", 00:04:31.098 "bdev_nvme_get_path_iostat", 00:04:31.098 "bdev_nvme_get_mdns_discovery_info", 00:04:31.098 "bdev_nvme_stop_mdns_discovery", 00:04:31.098 "bdev_nvme_start_mdns_discovery", 00:04:31.098 "bdev_nvme_set_multipath_policy", 00:04:31.098 "bdev_nvme_set_preferred_path", 00:04:31.098 "bdev_nvme_get_io_paths", 00:04:31.098 "bdev_nvme_remove_error_injection", 00:04:31.098 "bdev_nvme_add_error_injection", 00:04:31.098 "bdev_nvme_get_discovery_info", 00:04:31.098 "bdev_nvme_stop_discovery", 00:04:31.098 "bdev_nvme_start_discovery", 00:04:31.098 "bdev_nvme_get_controller_health_info", 00:04:31.098 "bdev_nvme_disable_controller", 00:04:31.098 "bdev_nvme_enable_controller", 00:04:31.098 "bdev_nvme_reset_controller", 00:04:31.098 "bdev_nvme_get_transport_statistics", 00:04:31.098 "bdev_nvme_apply_firmware", 00:04:31.098 "bdev_nvme_detach_controller", 00:04:31.098 "bdev_nvme_get_controllers", 00:04:31.098 "bdev_nvme_attach_controller", 00:04:31.098 "bdev_nvme_set_hotplug", 00:04:31.098 "bdev_nvme_set_options", 00:04:31.098 "bdev_passthru_delete", 00:04:31.098 "bdev_passthru_create", 00:04:31.098 "bdev_lvol_set_parent_bdev", 00:04:31.098 "bdev_lvol_set_parent", 00:04:31.098 "bdev_lvol_check_shallow_copy", 00:04:31.098 "bdev_lvol_start_shallow_copy", 00:04:31.098 "bdev_lvol_grow_lvstore", 00:04:31.098 "bdev_lvol_get_lvols", 00:04:31.098 "bdev_lvol_get_lvstores", 00:04:31.098 "bdev_lvol_delete", 00:04:31.098 "bdev_lvol_set_read_only", 00:04:31.098 "bdev_lvol_resize", 00:04:31.098 "bdev_lvol_decouple_parent", 00:04:31.098 "bdev_lvol_inflate", 00:04:31.098 "bdev_lvol_rename", 00:04:31.098 "bdev_lvol_clone_bdev", 00:04:31.098 "bdev_lvol_clone", 00:04:31.098 "bdev_lvol_snapshot", 00:04:31.098 "bdev_lvol_create", 00:04:31.098 "bdev_lvol_delete_lvstore", 00:04:31.098 "bdev_lvol_rename_lvstore", 
00:04:31.098 "bdev_lvol_create_lvstore", 00:04:31.098 "bdev_raid_set_options", 00:04:31.098 "bdev_raid_remove_base_bdev", 00:04:31.098 "bdev_raid_add_base_bdev", 00:04:31.098 "bdev_raid_delete", 00:04:31.098 "bdev_raid_create", 00:04:31.098 "bdev_raid_get_bdevs", 00:04:31.098 "bdev_error_inject_error", 00:04:31.098 "bdev_error_delete", 00:04:31.098 "bdev_error_create", 00:04:31.098 "bdev_split_delete", 00:04:31.098 "bdev_split_create", 00:04:31.098 "bdev_delay_delete", 00:04:31.098 "bdev_delay_create", 00:04:31.098 "bdev_delay_update_latency", 00:04:31.098 "bdev_zone_block_delete", 00:04:31.098 "bdev_zone_block_create", 00:04:31.098 "blobfs_create", 00:04:31.098 "blobfs_detect", 00:04:31.098 "blobfs_set_cache_size", 00:04:31.098 "bdev_aio_delete", 00:04:31.098 "bdev_aio_rescan", 00:04:31.098 "bdev_aio_create", 00:04:31.098 "bdev_ftl_set_property", 00:04:31.098 "bdev_ftl_get_properties", 00:04:31.098 "bdev_ftl_get_stats", 00:04:31.098 "bdev_ftl_unmap", 00:04:31.098 "bdev_ftl_unload", 00:04:31.098 "bdev_ftl_delete", 00:04:31.098 "bdev_ftl_load", 00:04:31.098 "bdev_ftl_create", 00:04:31.098 "bdev_virtio_attach_controller", 00:04:31.098 "bdev_virtio_scsi_get_devices", 00:04:31.098 "bdev_virtio_detach_controller", 00:04:31.098 "bdev_virtio_blk_set_hotplug", 00:04:31.098 "bdev_iscsi_delete", 00:04:31.098 "bdev_iscsi_create", 00:04:31.098 "bdev_iscsi_set_options", 00:04:31.098 "accel_error_inject_error", 00:04:31.098 "ioat_scan_accel_module", 00:04:31.098 "dsa_scan_accel_module", 00:04:31.098 "iaa_scan_accel_module", 00:04:31.098 "vfu_virtio_create_fs_endpoint", 00:04:31.098 "vfu_virtio_create_scsi_endpoint", 00:04:31.098 "vfu_virtio_scsi_remove_target", 00:04:31.098 "vfu_virtio_scsi_add_target", 00:04:31.098 "vfu_virtio_create_blk_endpoint", 00:04:31.098 "vfu_virtio_delete_endpoint", 00:04:31.098 "keyring_file_remove_key", 00:04:31.098 "keyring_file_add_key", 00:04:31.098 "keyring_linux_set_options", 00:04:31.098 "fsdev_aio_delete", 00:04:31.098 "fsdev_aio_create", 00:04:31.098 "iscsi_get_histogram", 00:04:31.098 "iscsi_enable_histogram", 00:04:31.098 "iscsi_set_options", 00:04:31.098 "iscsi_get_auth_groups", 00:04:31.098 "iscsi_auth_group_remove_secret", 00:04:31.098 "iscsi_auth_group_add_secret", 00:04:31.098 "iscsi_delete_auth_group", 00:04:31.098 "iscsi_create_auth_group", 00:04:31.098 "iscsi_set_discovery_auth", 00:04:31.098 "iscsi_get_options", 00:04:31.098 "iscsi_target_node_request_logout", 00:04:31.098 "iscsi_target_node_set_redirect", 00:04:31.098 "iscsi_target_node_set_auth", 00:04:31.098 "iscsi_target_node_add_lun", 00:04:31.098 "iscsi_get_stats", 00:04:31.098 "iscsi_get_connections", 00:04:31.098 "iscsi_portal_group_set_auth", 00:04:31.098 "iscsi_start_portal_group", 00:04:31.098 "iscsi_delete_portal_group", 00:04:31.098 "iscsi_create_portal_group", 00:04:31.098 "iscsi_get_portal_groups", 00:04:31.098 "iscsi_delete_target_node", 00:04:31.098 "iscsi_target_node_remove_pg_ig_maps", 00:04:31.098 "iscsi_target_node_add_pg_ig_maps", 00:04:31.098 "iscsi_create_target_node", 00:04:31.098 "iscsi_get_target_nodes", 00:04:31.098 "iscsi_delete_initiator_group", 00:04:31.098 "iscsi_initiator_group_remove_initiators", 00:04:31.098 "iscsi_initiator_group_add_initiators", 00:04:31.098 "iscsi_create_initiator_group", 00:04:31.098 "iscsi_get_initiator_groups", 00:04:31.098 "nvmf_set_crdt", 00:04:31.098 "nvmf_set_config", 00:04:31.098 "nvmf_set_max_subsystems", 00:04:31.098 "nvmf_stop_mdns_prr", 00:04:31.098 "nvmf_publish_mdns_prr", 00:04:31.098 "nvmf_subsystem_get_listeners", 00:04:31.098 
"nvmf_subsystem_get_qpairs", 00:04:31.098 "nvmf_subsystem_get_controllers", 00:04:31.098 "nvmf_get_stats", 00:04:31.098 "nvmf_get_transports", 00:04:31.098 "nvmf_create_transport", 00:04:31.098 "nvmf_get_targets", 00:04:31.098 "nvmf_delete_target", 00:04:31.098 "nvmf_create_target", 00:04:31.098 "nvmf_subsystem_allow_any_host", 00:04:31.098 "nvmf_subsystem_set_keys", 00:04:31.099 "nvmf_subsystem_remove_host", 00:04:31.099 "nvmf_subsystem_add_host", 00:04:31.099 "nvmf_ns_remove_host", 00:04:31.099 "nvmf_ns_add_host", 00:04:31.099 "nvmf_subsystem_remove_ns", 00:04:31.099 "nvmf_subsystem_set_ns_ana_group", 00:04:31.099 "nvmf_subsystem_add_ns", 00:04:31.099 "nvmf_subsystem_listener_set_ana_state", 00:04:31.099 "nvmf_discovery_get_referrals", 00:04:31.099 "nvmf_discovery_remove_referral", 00:04:31.099 "nvmf_discovery_add_referral", 00:04:31.099 "nvmf_subsystem_remove_listener", 00:04:31.099 "nvmf_subsystem_add_listener", 00:04:31.099 "nvmf_delete_subsystem", 00:04:31.099 "nvmf_create_subsystem", 00:04:31.099 "nvmf_get_subsystems", 00:04:31.099 "env_dpdk_get_mem_stats", 00:04:31.099 "nbd_get_disks", 00:04:31.099 "nbd_stop_disk", 00:04:31.099 "nbd_start_disk", 00:04:31.099 "ublk_recover_disk", 00:04:31.099 "ublk_get_disks", 00:04:31.099 "ublk_stop_disk", 00:04:31.099 "ublk_start_disk", 00:04:31.099 "ublk_destroy_target", 00:04:31.099 "ublk_create_target", 00:04:31.099 "virtio_blk_create_transport", 00:04:31.099 "virtio_blk_get_transports", 00:04:31.099 "vhost_controller_set_coalescing", 00:04:31.099 "vhost_get_controllers", 00:04:31.099 "vhost_delete_controller", 00:04:31.099 "vhost_create_blk_controller", 00:04:31.099 "vhost_scsi_controller_remove_target", 00:04:31.099 "vhost_scsi_controller_add_target", 00:04:31.099 "vhost_start_scsi_controller", 00:04:31.099 "vhost_create_scsi_controller", 00:04:31.099 "thread_set_cpumask", 00:04:31.099 "scheduler_set_options", 00:04:31.099 "framework_get_governor", 00:04:31.099 "framework_get_scheduler", 00:04:31.099 "framework_set_scheduler", 00:04:31.099 "framework_get_reactors", 00:04:31.099 "thread_get_io_channels", 00:04:31.099 "thread_get_pollers", 00:04:31.099 "thread_get_stats", 00:04:31.099 "framework_monitor_context_switch", 00:04:31.099 "spdk_kill_instance", 00:04:31.099 "log_enable_timestamps", 00:04:31.099 "log_get_flags", 00:04:31.099 "log_clear_flag", 00:04:31.099 "log_set_flag", 00:04:31.099 "log_get_level", 00:04:31.099 "log_set_level", 00:04:31.099 "log_get_print_level", 00:04:31.099 "log_set_print_level", 00:04:31.099 "framework_enable_cpumask_locks", 00:04:31.099 "framework_disable_cpumask_locks", 00:04:31.099 "framework_wait_init", 00:04:31.099 "framework_start_init", 00:04:31.099 "scsi_get_devices", 00:04:31.099 "bdev_get_histogram", 00:04:31.099 "bdev_enable_histogram", 00:04:31.099 "bdev_set_qos_limit", 00:04:31.099 "bdev_set_qd_sampling_period", 00:04:31.099 "bdev_get_bdevs", 00:04:31.099 "bdev_reset_iostat", 00:04:31.099 "bdev_get_iostat", 00:04:31.099 "bdev_examine", 00:04:31.099 "bdev_wait_for_examine", 00:04:31.099 "bdev_set_options", 00:04:31.099 "accel_get_stats", 00:04:31.099 "accel_set_options", 00:04:31.099 "accel_set_driver", 00:04:31.099 "accel_crypto_key_destroy", 00:04:31.099 "accel_crypto_keys_get", 00:04:31.099 "accel_crypto_key_create", 00:04:31.099 "accel_assign_opc", 00:04:31.099 "accel_get_module_info", 00:04:31.099 "accel_get_opc_assignments", 00:04:31.099 "vmd_rescan", 00:04:31.099 "vmd_remove_device", 00:04:31.099 "vmd_enable", 00:04:31.099 "sock_get_default_impl", 00:04:31.099 "sock_set_default_impl", 
00:04:31.099 "sock_impl_set_options", 00:04:31.099 "sock_impl_get_options", 00:04:31.099 "iobuf_get_stats", 00:04:31.099 "iobuf_set_options", 00:04:31.099 "keyring_get_keys", 00:04:31.099 "vfu_tgt_set_base_path", 00:04:31.099 "framework_get_pci_devices", 00:04:31.099 "framework_get_config", 00:04:31.099 "framework_get_subsystems", 00:04:31.099 "fsdev_set_opts", 00:04:31.099 "fsdev_get_opts", 00:04:31.099 "trace_get_info", 00:04:31.099 "trace_get_tpoint_group_mask", 00:04:31.099 "trace_disable_tpoint_group", 00:04:31.099 "trace_enable_tpoint_group", 00:04:31.099 "trace_clear_tpoint_mask", 00:04:31.099 "trace_set_tpoint_mask", 00:04:31.099 "notify_get_notifications", 00:04:31.099 "notify_get_types", 00:04:31.099 "spdk_get_version", 00:04:31.099 "rpc_get_methods" 00:04:31.099 ] 00:04:31.099 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.099 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:31.099 11:22:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1025777 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 1025777 ']' 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 1025777 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1025777 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1025777' 00:04:31.099 killing process with pid 1025777 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 1025777 00:04:31.099 11:22:31 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 1025777 00:04:31.359 00:04:31.359 real 0m1.323s 00:04:31.359 user 0m2.355s 00:04:31.359 sys 0m0.473s 00:04:31.359 11:22:32 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.359 11:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.359 ************************************ 00:04:31.359 END TEST spdkcli_tcp 00:04:31.359 ************************************ 00:04:31.359 11:22:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.359 11:22:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.359 11:22:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.359 11:22:32 -- common/autotest_common.sh@10 -- # set +x 00:04:31.359 ************************************ 00:04:31.359 START TEST dpdk_mem_utility 00:04:31.359 ************************************ 00:04:31.359 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.618 * Looking for test storage... 
00:04:31.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.618 11:22:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:31.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.618 --rc genhtml_branch_coverage=1 00:04:31.618 --rc genhtml_function_coverage=1 00:04:31.618 --rc genhtml_legend=1 00:04:31.618 --rc geninfo_all_blocks=1 00:04:31.618 --rc geninfo_unexecuted_blocks=1 00:04:31.618 00:04:31.618 ' 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:31.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.618 --rc 
genhtml_branch_coverage=1 00:04:31.618 --rc genhtml_function_coverage=1 00:04:31.618 --rc genhtml_legend=1 00:04:31.618 --rc geninfo_all_blocks=1 00:04:31.618 --rc geninfo_unexecuted_blocks=1 00:04:31.618 00:04:31.618 ' 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:31.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.618 --rc genhtml_branch_coverage=1 00:04:31.618 --rc genhtml_function_coverage=1 00:04:31.618 --rc genhtml_legend=1 00:04:31.618 --rc geninfo_all_blocks=1 00:04:31.618 --rc geninfo_unexecuted_blocks=1 00:04:31.618 00:04:31.618 ' 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:31.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.618 --rc genhtml_branch_coverage=1 00:04:31.618 --rc genhtml_function_coverage=1 00:04:31.618 --rc genhtml_legend=1 00:04:31.618 --rc geninfo_all_blocks=1 00:04:31.618 --rc geninfo_unexecuted_blocks=1 00:04:31.618 00:04:31.618 ' 00:04:31.618 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.618 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1026084 00:04:31.618 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1026084 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 1026084 ']' 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.618 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.618 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.618 [2024-11-15 11:22:32.435719] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:31.618 [2024-11-15 11:22:32.435779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026084 ] 00:04:31.878 [2024-11-15 11:22:32.529565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.878 [2024-11-15 11:22:32.579408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.137 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.137 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:32.137 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:32.137 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:32.137 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.137 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.137 { 00:04:32.137 "filename": "/tmp/spdk_mem_dump.txt" 00:04:32.137 } 00:04:32.137 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.137 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:32.137 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:32.137 1 heaps totaling size 818.000000 MiB 00:04:32.137 size: 818.000000 MiB heap id: 0 00:04:32.137 end heaps---------- 00:04:32.137 9 mempools totaling size 603.782043 MiB 00:04:32.137 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:32.137 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:32.137 size: 100.555481 MiB name: bdev_io_1026084 00:04:32.137 size: 50.003479 MiB name: msgpool_1026084 00:04:32.137 size: 36.509338 MiB name: fsdev_io_1026084 00:04:32.137 size: 21.763794 MiB name: PDU_Pool 00:04:32.137 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:32.137 size: 4.133484 MiB name: evtpool_1026084 00:04:32.137 size: 0.026123 MiB name: Session_Pool 00:04:32.137 end mempools------- 00:04:32.137 6 memzones totaling size 4.142822 MiB 00:04:32.137 size: 1.000366 MiB name: RG_ring_0_1026084 00:04:32.137 size: 1.000366 MiB name: RG_ring_1_1026084 00:04:32.137 size: 1.000366 MiB name: RG_ring_4_1026084 00:04:32.137 size: 1.000366 MiB name: RG_ring_5_1026084 00:04:32.137 size: 0.125366 MiB name: RG_ring_2_1026084 00:04:32.137 size: 0.015991 MiB name: RG_ring_3_1026084 00:04:32.137 end memzones------- 00:04:32.137 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:32.137 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:32.137 list of free elements. 
size: 10.852478 MiB 00:04:32.137 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:32.137 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:32.137 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:32.137 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:32.137 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:32.137 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:32.137 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:32.137 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:32.137 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:32.137 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:32.137 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:32.137 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:32.137 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:32.137 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:32.137 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:32.137 list of standard malloc elements. size: 199.218628 MiB 00:04:32.137 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:32.137 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:32.137 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:32.137 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:32.137 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:32.137 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:32.137 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:32.137 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:32.137 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:32.137 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:32.137 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:32.138 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:32.138 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:32.138 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:32.138 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:32.138 list of memzone associated elements. size: 607.928894 MiB 00:04:32.138 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:32.138 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:32.138 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:32.138 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:32.138 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:32.138 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1026084_0 00:04:32.138 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:32.138 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1026084_0 00:04:32.138 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:32.138 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1026084_0 00:04:32.138 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:32.138 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:32.138 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:32.138 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:32.138 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:32.138 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1026084_0 00:04:32.138 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:32.138 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1026084 00:04:32.138 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:32.138 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1026084 00:04:32.138 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:32.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:32.138 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:32.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:32.138 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:32.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:32.138 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:32.138 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:32.138 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:32.138 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1026084 00:04:32.138 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:32.138 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1026084 00:04:32.138 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:32.138 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1026084 00:04:32.138 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:32.138 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1026084 00:04:32.138 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:32.138 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1026084 00:04:32.138 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:32.138 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1026084 00:04:32.138 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:32.138 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:32.138 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:32.138 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:32.138 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:32.138 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:32.138 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:32.138 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1026084 00:04:32.138 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:32.138 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1026084 00:04:32.138 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:32.138 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:32.138 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:32.138 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:32.138 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:32.138 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1026084 00:04:32.138 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:32.138 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:32.138 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:32.138 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1026084 00:04:32.138 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:32.138 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1026084 00:04:32.138 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:32.138 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1026084 00:04:32.138 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:32.138 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:32.138 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:32.138 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1026084 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 1026084 ']' 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 1026084 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1026084 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1026084' 00:04:32.138 killing process with pid 1026084 00:04:32.138 11:22:32 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 1026084 00:04:32.138 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 1026084 00:04:32.707 00:04:32.707 real 0m1.094s 00:04:32.707 user 0m1.056s 00:04:32.707 sys 0m0.442s 00:04:32.707 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.707 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.707 ************************************ 00:04:32.707 END TEST dpdk_mem_utility 00:04:32.707 ************************************ 00:04:32.707 11:22:33 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:32.707 11:22:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.707 11:22:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.707 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:04:32.707 ************************************ 00:04:32.707 START TEST event 00:04:32.707 ************************************ 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:32.707 * Looking for test storage... 00:04:32.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.707 11:22:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.707 11:22:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.707 11:22:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.707 11:22:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.707 11:22:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.707 11:22:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.707 11:22:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.707 11:22:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.707 11:22:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.707 11:22:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.707 11:22:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.707 11:22:33 event -- scripts/common.sh@344 -- # case "$op" in 00:04:32.707 11:22:33 event -- scripts/common.sh@345 -- # : 1 00:04:32.707 11:22:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.707 11:22:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.707 11:22:33 event -- scripts/common.sh@365 -- # decimal 1 00:04:32.707 11:22:33 event -- scripts/common.sh@353 -- # local d=1 00:04:32.707 11:22:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.707 11:22:33 event -- scripts/common.sh@355 -- # echo 1 00:04:32.707 11:22:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.707 11:22:33 event -- scripts/common.sh@366 -- # decimal 2 00:04:32.707 11:22:33 event -- scripts/common.sh@353 -- # local d=2 00:04:32.707 11:22:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.707 11:22:33 event -- scripts/common.sh@355 -- # echo 2 00:04:32.707 11:22:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.707 11:22:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.707 11:22:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.707 11:22:33 event -- scripts/common.sh@368 -- # return 0 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.707 --rc genhtml_branch_coverage=1 00:04:32.707 --rc genhtml_function_coverage=1 00:04:32.707 --rc genhtml_legend=1 00:04:32.707 --rc geninfo_all_blocks=1 00:04:32.707 --rc geninfo_unexecuted_blocks=1 00:04:32.707 00:04:32.707 ' 00:04:32.707 11:22:33 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.708 --rc genhtml_branch_coverage=1 00:04:32.708 --rc genhtml_function_coverage=1 00:04:32.708 --rc genhtml_legend=1 00:04:32.708 --rc geninfo_all_blocks=1 00:04:32.708 --rc geninfo_unexecuted_blocks=1 00:04:32.708 00:04:32.708 ' 00:04:32.708 11:22:33 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.708 --rc genhtml_branch_coverage=1 00:04:32.708 --rc genhtml_function_coverage=1 00:04:32.708 --rc genhtml_legend=1 00:04:32.708 --rc geninfo_all_blocks=1 00:04:32.708 --rc geninfo_unexecuted_blocks=1 00:04:32.708 00:04:32.708 ' 00:04:32.708 11:22:33 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.708 --rc genhtml_branch_coverage=1 00:04:32.708 --rc genhtml_function_coverage=1 00:04:32.708 --rc genhtml_legend=1 00:04:32.708 --rc geninfo_all_blocks=1 00:04:32.708 --rc geninfo_unexecuted_blocks=1 00:04:32.708 00:04:32.708 ' 00:04:32.708 11:22:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:32.708 11:22:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:32.708 11:22:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.708 11:22:33 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:32.708 11:22:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.708 11:22:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.977 ************************************ 00:04:32.977 START TEST event_perf 00:04:32.977 ************************************ 00:04:32.977 11:22:33 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:32.977 Running I/O for 1 seconds...[2024-11-15 11:22:33.598899] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:32.977 [2024-11-15 11:22:33.598966] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026397 ] 00:04:32.977 [2024-11-15 11:22:33.694310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.977 [2024-11-15 11:22:33.747564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.977 [2024-11-15 11:22:33.747668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.978 [2024-11-15 11:22:33.747742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.978 [2024-11-15 11:22:33.747746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.355 Running I/O for 1 seconds... 00:04:34.355 lcore 0: 186706 00:04:34.355 lcore 1: 186700 00:04:34.355 lcore 2: 186702 00:04:34.355 lcore 3: 186704 00:04:34.355 done. 00:04:34.355 00:04:34.355 real 0m1.218s 00:04:34.355 user 0m4.123s 00:04:34.355 sys 0m0.090s 00:04:34.355 11:22:34 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.355 11:22:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.355 ************************************ 00:04:34.355 END TEST event_perf 00:04:34.355 ************************************ 00:04:34.355 11:22:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:34.355 11:22:34 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:34.355 11:22:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.355 11:22:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.355 ************************************ 00:04:34.355 START TEST event_reactor 00:04:34.355 ************************************ 00:04:34.355 11:22:34 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:34.355 [2024-11-15 11:22:34.883380] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:34.355 [2024-11-15 11:22:34.883439] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026685 ] 00:04:34.355 [2024-11-15 11:22:34.977044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.355 [2024-11-15 11:22:35.024807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.292 test_start 00:04:35.292 oneshot 00:04:35.292 tick 100 00:04:35.292 tick 100 00:04:35.292 tick 250 00:04:35.292 tick 100 00:04:35.292 tick 100 00:04:35.292 tick 250 00:04:35.292 tick 100 00:04:35.292 tick 500 00:04:35.292 tick 100 00:04:35.292 tick 100 00:04:35.292 tick 250 00:04:35.292 tick 100 00:04:35.292 tick 100 00:04:35.292 test_end 00:04:35.292 00:04:35.292 real 0m1.209s 00:04:35.292 user 0m1.122s 00:04:35.292 sys 0m0.082s 00:04:35.292 11:22:36 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.292 11:22:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:35.292 ************************************ 00:04:35.292 END TEST event_reactor 00:04:35.292 ************************************ 00:04:35.292 11:22:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.292 11:22:36 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:35.292 11:22:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.292 11:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.292 ************************************ 00:04:35.292 START TEST event_reactor_perf 00:04:35.292 ************************************ 00:04:35.292 11:22:36 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.551 [2024-11-15 11:22:36.161806] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:35.551 [2024-11-15 11:22:36.161874] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026965 ] 00:04:35.551 [2024-11-15 11:22:36.258103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.551 [2024-11-15 11:22:36.306104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.928 test_start 00:04:36.928 test_end 00:04:36.928 Performance: 315153 events per second 00:04:36.928 00:04:36.928 real 0m1.213s 00:04:36.928 user 0m1.126s 00:04:36.928 sys 0m0.082s 00:04:36.928 11:22:37 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.928 11:22:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.928 ************************************ 00:04:36.928 END TEST event_reactor_perf 00:04:36.928 ************************************ 00:04:36.928 11:22:37 event -- event/event.sh@49 -- # uname -s 00:04:36.928 11:22:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:36.928 11:22:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.928 11:22:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.928 11:22:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.928 11:22:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.928 ************************************ 00:04:36.928 START TEST event_scheduler 00:04:36.928 ************************************ 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.928 * Looking for test storage... 
00:04:36.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.928 11:22:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.928 --rc genhtml_branch_coverage=1 00:04:36.928 --rc genhtml_function_coverage=1 00:04:36.928 --rc genhtml_legend=1 00:04:36.928 --rc geninfo_all_blocks=1 00:04:36.928 --rc geninfo_unexecuted_blocks=1 00:04:36.928 00:04:36.928 ' 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.928 --rc genhtml_branch_coverage=1 00:04:36.928 --rc genhtml_function_coverage=1 00:04:36.928 --rc genhtml_legend=1 00:04:36.928 --rc geninfo_all_blocks=1 00:04:36.928 --rc geninfo_unexecuted_blocks=1 00:04:36.928 00:04:36.928 ' 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.928 --rc genhtml_branch_coverage=1 00:04:36.928 --rc genhtml_function_coverage=1 00:04:36.928 --rc genhtml_legend=1 00:04:36.928 --rc geninfo_all_blocks=1 00:04:36.928 --rc geninfo_unexecuted_blocks=1 00:04:36.928 00:04:36.928 ' 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.928 --rc genhtml_branch_coverage=1 00:04:36.928 --rc genhtml_function_coverage=1 00:04:36.928 --rc genhtml_legend=1 00:04:36.928 --rc geninfo_all_blocks=1 00:04:36.928 --rc geninfo_unexecuted_blocks=1 00:04:36.928 00:04:36.928 ' 00:04:36.928 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:36.928 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1027277 00:04:36.928 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.928 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:36.928 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1027277 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 1027277 ']' 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.928 11:22:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.928 [2024-11-15 11:22:37.645627] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:36.928 [2024-11-15 11:22:37.645691] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027277 ] 00:04:36.928 [2024-11-15 11:22:37.713654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.928 [2024-11-15 11:22:37.754545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.928 [2024-11-15 11:22:37.754649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.928 [2024-11-15 11:22:37.754742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.928 [2024-11-15 11:22:37.754743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:37.188 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.188 [2024-11-15 11:22:37.875453] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:37.188 [2024-11-15 11:22:37.875474] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:37.188 [2024-11-15 11:22:37.875482] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:37.188 [2024-11-15 11:22:37.875488] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:37.188 [2024-11-15 11:22:37.875492] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.188 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.188 [2024-11-15 11:22:37.949397] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
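The scheduler test above starts its app with --wait-for-rpc, selects the dynamic scheduler over RPC, and only then lets framework initialization finish, so the scheduler choice is in effect before any test threads are created; the dpdk governor error above comes from the core mask covering only part of an SMT sibling set, and the run continues without the governor. The same sequence by hand, as a sketch that calls rpc.py directly instead of the test's rpc_cmd wrapper:

    # start the scheduler test app paused at the RPC stage (core mask 0xF, main core 2)
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # choose the dynamic scheduler while the framework is still waiting
    ./scripts/rpc.py framework_set_scheduler dynamic
    # resume initialization; reactors then start on all four cores
    ./scripts/rpc.py framework_start_init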
00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.188 11:22:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.188 11:22:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.188 ************************************ 00:04:37.188 START TEST scheduler_create_thread 00:04:37.188 ************************************ 00:04:37.188 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 2 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 3 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 4 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 5 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 6 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 7 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.189 8 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.189 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.447 9 00:04:37.447 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.447 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:37.447 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.448 10 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.448 11:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 11:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.826 11:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:38.826 11:22:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:38.826 11:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.826 11:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.763 11:22:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.763 00:04:39.763 real 0m2.616s 00:04:39.763 user 0m0.008s 00:04:39.763 sys 0m0.006s 00:04:39.763 11:22:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.763 11:22:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.763 ************************************ 00:04:39.763 END TEST scheduler_create_thread 00:04:39.763 ************************************ 00:04:40.022 11:22:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:40.022 11:22:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1027277 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 1027277 ']' 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 1027277 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1027277 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1027277' 00:04:40.022 killing process with pid 1027277 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 1027277 00:04:40.022 11:22:40 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 1027277 00:04:40.281 [2024-11-15 11:22:41.075494] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
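The scheduler_create_thread test above drives the thread life-cycle through the scheduler_plugin RPC plugin bundled with the scheduler test app. A minimal sketch of that cycle, assuming the plugin directory is on PYTHONPATH and <thread_id> stands for the id returned by the create call (left symbolic here):

  # Create a pinned thread at 100% activity and an unpinned one that starts idle.
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
  # Raise the unpinned thread to 50% activity, then delete it.
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active <thread_id> 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete <thread_id>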
00:04:40.540 00:04:40.540 real 0m3.813s 00:04:40.540 user 0m5.939s 00:04:40.540 sys 0m0.359s 00:04:40.540 11:22:41 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.540 11:22:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.540 ************************************ 00:04:40.540 END TEST event_scheduler 00:04:40.540 ************************************ 00:04:40.540 11:22:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:40.540 11:22:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:40.540 11:22:41 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.540 11:22:41 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.540 11:22:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.540 ************************************ 00:04:40.540 START TEST app_repeat 00:04:40.540 ************************************ 00:04:40.540 11:22:41 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:40.540 11:22:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1028071 00:04:40.541 11:22:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.541 11:22:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1028071' 00:04:40.541 Process app_repeat pid: 1028071 00:04:40.541 11:22:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.541 11:22:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:40.541 spdk_app_start Round 0 00:04:40.541 11:22:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1028071 /var/tmp/spdk-nbd.sock 00:04:40.541 11:22:41 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1028071 ']' 00:04:40.541 11:22:41 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.541 11:22:41 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.541 11:22:41 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:40.541 11:22:41 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.541 11:22:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.541 11:22:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:40.541 [2024-11-15 11:22:41.327590] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:04:40.541 [2024-11-15 11:22:41.327643] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028071 ] 00:04:40.800 [2024-11-15 11:22:41.420443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.800 [2024-11-15 11:22:41.473078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.800 [2024-11-15 11:22:41.473086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.800 11:22:41 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.800 11:22:41 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:40.800 11:22:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.060 Malloc0 00:04:41.060 11:22:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.319 Malloc1 00:04:41.319 11:22:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.319 11:22:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.887 /dev/nbd0 00:04:41.887 11:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.887 11:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.887 1+0 records in 00:04:41.887 1+0 records out 00:04:41.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024072 s, 17.0 MB/s 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:41.887 11:22:42 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:41.887 11:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.887 11:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.887 11:22:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.146 /dev/nbd1 00:04:42.146 11:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.146 11:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.146 1+0 records in 00:04:42.146 1+0 records out 00:04:42.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146561 s, 27.9 MB/s 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:42.146 11:22:42 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:42.146 11:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.146 11:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.146 11:22:42 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.146 11:22:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.146 11:22:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:42.406 { 00:04:42.406 "nbd_device": "/dev/nbd0", 00:04:42.406 "bdev_name": "Malloc0" 00:04:42.406 }, 00:04:42.406 { 00:04:42.406 "nbd_device": "/dev/nbd1", 00:04:42.406 "bdev_name": "Malloc1" 00:04:42.406 } 00:04:42.406 ]' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.406 { 00:04:42.406 "nbd_device": "/dev/nbd0", 00:04:42.406 "bdev_name": "Malloc0" 00:04:42.406 }, 00:04:42.406 { 00:04:42.406 "nbd_device": "/dev/nbd1", 00:04:42.406 "bdev_name": "Malloc1" 00:04:42.406 } 00:04:42.406 ]' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.406 /dev/nbd1' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.406 /dev/nbd1' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.406 256+0 records in 00:04:42.406 256+0 records out 00:04:42.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658329 s, 159 MB/s 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.406 256+0 records in 00:04:42.406 256+0 records out 00:04:42.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197855 s, 53.0 MB/s 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:42.406 256+0 records in 00:04:42.406 256+0 records out 00:04:42.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217149 s, 48.3 MB/s 00:04:42.406 11:22:43 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.406 11:22:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.665 11:22:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.233 11:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.233 11:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.233 11:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.233 11:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.491 11:22:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.491 11:22:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.750 11:22:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.750 [2024-11-15 11:22:44.601553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.008 [2024-11-15 11:22:44.647211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.008 [2024-11-15 11:22:44.647216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.008 [2024-11-15 11:22:44.691947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.008 [2024-11-15 11:22:44.691994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.297 11:22:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.297 11:22:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:47.297 spdk_app_start Round 1 00:04:47.297 11:22:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1028071 /var/tmp/spdk-nbd.sock 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1028071 ']' 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
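Each app_repeat round traced above follows the same verify pattern: export two malloc bdevs over NBD, write a random reference file through the block devices, and compare it back. A condensed sketch of one round (sizes, bdev names and socket path taken from the trace; rpc.py path shortened and the scratch file name illustrative):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                # 1 MiB reference data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct      # write through NBD
  cmp -b -n 1M nbdrandtest /dev/nbd0                                 # read back and verify
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # tear down before the next round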
00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.297 11:22:47 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:47.297 11:22:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.297 Malloc0 00:04:47.297 11:22:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.556 Malloc1 00:04:47.556 11:22:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.556 11:22:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.815 /dev/nbd0 00:04:47.815 11:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.815 11:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:47.815 1+0 records in 00:04:47.815 1+0 records out 00:04:47.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188783 s, 21.7 MB/s 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:47.815 11:22:48 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:47.815 11:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.816 11:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.816 11:22:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.074 /dev/nbd1 00:04:48.074 11:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.074 11:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:48.074 11:22:48 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.074 1+0 records in 00:04:48.074 1+0 records out 00:04:48.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178637 s, 22.9 MB/s 00:04:48.075 11:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.075 11:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:48.075 11:22:48 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.075 11:22:48 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:48.075 11:22:48 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:48.075 11:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.075 11:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.075 11:22:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.075 11:22:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.075 11:22:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:48.643 { 00:04:48.643 "nbd_device": "/dev/nbd0", 00:04:48.643 "bdev_name": "Malloc0" 00:04:48.643 }, 00:04:48.643 { 00:04:48.643 "nbd_device": "/dev/nbd1", 00:04:48.643 "bdev_name": "Malloc1" 00:04:48.643 } 00:04:48.643 ]' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.643 { 00:04:48.643 "nbd_device": "/dev/nbd0", 00:04:48.643 "bdev_name": "Malloc0" 00:04:48.643 }, 00:04:48.643 { 00:04:48.643 "nbd_device": "/dev/nbd1", 00:04:48.643 "bdev_name": "Malloc1" 00:04:48.643 } 00:04:48.643 ]' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.643 /dev/nbd1' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.643 /dev/nbd1' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.643 256+0 records in 00:04:48.643 256+0 records out 00:04:48.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106779 s, 98.2 MB/s 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.643 256+0 records in 00:04:48.643 256+0 records out 00:04:48.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201262 s, 52.1 MB/s 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.643 256+0 records in 00:04:48.643 256+0 records out 00:04:48.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209849 s, 50.0 MB/s 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.643 11:22:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.902 11:22:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.161 11:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.420 11:22:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.420 11:22:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.679 11:22:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.938 [2024-11-15 11:22:50.701444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.938 [2024-11-15 11:22:50.746747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.938 [2024-11-15 11:22:50.746753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.197 [2024-11-15 11:22:50.792179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.197 [2024-11-15 11:22:50.792222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.729 11:22:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.729 11:22:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:52.729 spdk_app_start Round 2 00:04:52.729 11:22:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1028071 /var/tmp/spdk-nbd.sock 00:04:52.729 11:22:53 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1028071 ']' 00:04:52.729 11:22:53 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.729 11:22:53 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.729 11:22:53 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
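After both devices are detached, nbd_rpc_data_verify confirms the device count drops back to zero by parsing the nbd_get_disks JSON, as in the trace above. A small sketch of that check (same jq/grep pipeline and socket path as the trace):

  disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # 2 while Malloc0/Malloc1 are attached, 0 after nbd_stop_disk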
00:04:52.729 11:22:53 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.729 11:22:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.987 11:22:53 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.987 11:22:53 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:52.987 11:22:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.245 Malloc0 00:04:53.245 11:22:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.810 Malloc1 00:04:53.810 11:22:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.810 11:22:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.068 /dev/nbd0 00:04:54.068 11:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.068 11:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:54.068 1+0 records in 00:04:54.068 1+0 records out 00:04:54.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220361 s, 18.6 MB/s 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:54.068 11:22:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:54.068 11:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.068 11:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.068 11:22:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.068 /dev/nbd1 00:04:54.068 11:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.324 11:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.324 1+0 records in 00:04:54.324 1+0 records out 00:04:54.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217631 s, 18.8 MB/s 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:54.324 11:22:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:54.324 11:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.324 11:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.324 11:22:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.324 11:22:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.325 11:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:54.582 { 00:04:54.582 "nbd_device": "/dev/nbd0", 00:04:54.582 "bdev_name": "Malloc0" 00:04:54.582 }, 00:04:54.582 { 00:04:54.582 "nbd_device": "/dev/nbd1", 00:04:54.582 "bdev_name": "Malloc1" 00:04:54.582 } 00:04:54.582 ]' 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.582 { 00:04:54.582 "nbd_device": "/dev/nbd0", 00:04:54.582 "bdev_name": "Malloc0" 00:04:54.582 }, 00:04:54.582 { 00:04:54.582 "nbd_device": "/dev/nbd1", 00:04:54.582 "bdev_name": "Malloc1" 00:04:54.582 } 00:04:54.582 ]' 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.582 /dev/nbd1' 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.582 /dev/nbd1' 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.582 11:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.583 256+0 records in 00:04:54.583 256+0 records out 00:04:54.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107299 s, 97.7 MB/s 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.583 256+0 records in 00:04:54.583 256+0 records out 00:04:54.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199466 s, 52.6 MB/s 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.583 256+0 records in 00:04:54.583 256+0 records out 00:04:54.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213153 s, 49.2 MB/s 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.583 11:22:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.840 11:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.840 11:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.840 11:22:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.840 11:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.840 11:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.840 11:22:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.841 11:22:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.841 11:22:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.841 11:22:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.841 11:22:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.098 11:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.356 11:22:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.356 11:22:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.614 11:22:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.873 [2024-11-15 11:22:56.540859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.873 [2024-11-15 11:22:56.585643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.873 [2024-11-15 11:22:56.585651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.873 [2024-11-15 11:22:56.630541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.873 [2024-11-15 11:22:56.630586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.160 11:22:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1028071 /var/tmp/spdk-nbd.sock 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1028071 ']' 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
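For reference, the nbd write/verify pass recorded above reduces to roughly the following shell sequence; this is a simplified sketch of what the nbd_common.sh helpers do, with the temp-file path, poll interval, and device list treated as illustrative rather than exact.

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest

  # write: fill a 1 MiB pattern file, then copy it onto every nbd device
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify: byte-compare the first 1 MiB of each device against the pattern file
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"

  # teardown: stop each disk over RPC, then poll /proc/partitions (up to 20 tries)
  # until the kernel device node disappears
  for dev in "${nbd_list[@]}"; do
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$(basename "$dev")" /proc/partitions || break
          sleep 0.1   # poll interval illustrative
      done
  done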
00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:59.160 11:22:59 event.app_repeat -- event/event.sh@39 -- # killprocess 1028071 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 1028071 ']' 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 1028071 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1028071 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1028071' 00:04:59.160 killing process with pid 1028071 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@971 -- # kill 1028071 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@976 -- # wait 1028071 00:04:59.160 spdk_app_start is called in Round 0. 00:04:59.160 Shutdown signal received, stop current app iteration 00:04:59.160 Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 reinitialization... 00:04:59.160 spdk_app_start is called in Round 1. 00:04:59.160 Shutdown signal received, stop current app iteration 00:04:59.160 Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 reinitialization... 00:04:59.160 spdk_app_start is called in Round 2. 00:04:59.160 Shutdown signal received, stop current app iteration 00:04:59.160 Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 reinitialization... 00:04:59.160 spdk_app_start is called in Round 3. 
00:04:59.160 Shutdown signal received, stop current app iteration 00:04:59.160 11:22:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.160 11:22:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.160 00:04:59.160 real 0m18.461s 00:04:59.160 user 0m41.567s 00:04:59.160 sys 0m3.072s 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.160 11:22:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.160 ************************************ 00:04:59.160 END TEST app_repeat 00:04:59.160 ************************************ 00:04:59.160 11:22:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.160 11:22:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.160 11:22:59 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.160 11:22:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.160 11:22:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.160 ************************************ 00:04:59.160 START TEST cpu_locks 00:04:59.160 ************************************ 00:04:59.160 11:22:59 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.160 * Looking for test storage... 00:04:59.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:59.160 11:22:59 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.160 11:22:59 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.160 11:22:59 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.160 11:23:00 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:59.160 11:23:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.419 11:23:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.419 --rc genhtml_branch_coverage=1 00:04:59.419 --rc genhtml_function_coverage=1 00:04:59.419 --rc genhtml_legend=1 00:04:59.419 --rc geninfo_all_blocks=1 00:04:59.419 --rc geninfo_unexecuted_blocks=1 00:04:59.419 00:04:59.419 ' 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.419 --rc genhtml_branch_coverage=1 00:04:59.419 --rc genhtml_function_coverage=1 00:04:59.419 --rc genhtml_legend=1 00:04:59.419 --rc geninfo_all_blocks=1 00:04:59.419 --rc geninfo_unexecuted_blocks=1 00:04:59.419 00:04:59.419 ' 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.419 --rc genhtml_branch_coverage=1 00:04:59.419 --rc genhtml_function_coverage=1 00:04:59.419 --rc genhtml_legend=1 00:04:59.419 --rc geninfo_all_blocks=1 00:04:59.419 --rc geninfo_unexecuted_blocks=1 00:04:59.419 00:04:59.419 ' 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.419 --rc genhtml_branch_coverage=1 00:04:59.419 --rc genhtml_function_coverage=1 00:04:59.419 --rc genhtml_legend=1 00:04:59.419 --rc geninfo_all_blocks=1 00:04:59.419 --rc geninfo_unexecuted_blocks=1 00:04:59.419 00:04:59.419 ' 00:04:59.419 11:23:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.419 11:23:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.419 11:23:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.419 11:23:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.419 11:23:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.419 ************************************ 
00:04:59.419 START TEST default_locks 00:04:59.419 ************************************ 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1031755 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1031755 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1031755 ']' 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.419 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.419 [2024-11-15 11:23:00.099346] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:04:59.419 [2024-11-15 11:23:00.099386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031755 ] 00:04:59.419 [2024-11-15 11:23:00.183198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.419 [2024-11-15 11:23:00.231966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.677 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.677 11:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:59.677 11:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1031755 00:04:59.677 11:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1031755 00:04:59.677 11:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.245 lslocks: write error 00:05:00.245 11:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1031755 00:05:00.245 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 1031755 ']' 00:05:00.245 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 1031755 00:05:00.245 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:00.245 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.245 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1031755 00:05:00.505 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:00.505 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:00.505 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 1031755' 00:05:00.505 killing process with pid 1031755 00:05:00.505 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 1031755 00:05:00.505 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 1031755 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1031755 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1031755 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1031755 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1031755 ']' 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1031755) - No such process 00:05:00.765 ERROR: process (pid: 1031755) is no longer running 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.765 00:05:00.765 real 0m1.400s 00:05:00.765 user 0m1.421s 00:05:00.765 sys 0m0.600s 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.765 11:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.765 ************************************ 00:05:00.765 END TEST default_locks 00:05:00.765 ************************************ 00:05:00.765 11:23:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:00.765 11:23:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.765 11:23:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.765 11:23:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.765 ************************************ 00:05:00.765 START TEST default_locks_via_rpc 00:05:00.765 ************************************ 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1032051 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1032051 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1032051 ']' 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
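The default_locks case that finished just above follows this rough shape; a simplified sketch using the binary path and helper names from the run (the stray "lslocks: write error" lines are most likely just lslocks hitting a closed pipe once grep -q matches and exits early):

  # a single-core target claims core 0 and flocks /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 &
  pid=$!

  # locks_exist: the lock file must show up among the process's file locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock

  # tear the target down, then confirm that waiting on the dead PID now fails
  kill "$pid" && wait "$pid"
  if waitforlisten "$pid"; then
      echo "unexpected: target should be gone"
  fi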
00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.765 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.765 [2024-11-15 11:23:01.589496] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:00.765 [2024-11-15 11:23:01.589559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032051 ] 00:05:01.024 [2024-11-15 11:23:01.685177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.024 [2024-11-15 11:23:01.733735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1032051 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1032051 00:05:01.284 11:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.284 11:23:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1032051 00:05:01.284 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 1032051 ']' 00:05:01.284 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 1032051 00:05:01.284 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:01.284 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.284 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032051 00:05:01.542 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.542 
11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.542 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032051' 00:05:01.542 killing process with pid 1032051 00:05:01.542 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 1032051 00:05:01.542 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 1032051 00:05:01.801 00:05:01.801 real 0m0.947s 00:05:01.801 user 0m0.941s 00:05:01.801 sys 0m0.419s 00:05:01.801 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.801 11:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.801 ************************************ 00:05:01.801 END TEST default_locks_via_rpc 00:05:01.801 ************************************ 00:05:01.801 11:23:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.801 11:23:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.801 11:23:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.801 11:23:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.801 ************************************ 00:05:01.801 START TEST non_locking_app_on_locked_coremask 00:05:01.801 ************************************ 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1032129 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1032129 /var/tmp/spdk.sock 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1032129 ']' 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.801 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.801 [2024-11-15 11:23:02.604546] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
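default_locks_via_rpc, which ended a few lines up, drives the same core lock but toggles it at runtime over RPC instead of at startup; a sketch under the assumption that the lock files live under /var/tmp/spdk_cpu_lock_* as shown later in this run:

  # drop the core locks on a running target and check that no lock files remain
  ./scripts/rpc.py framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected: locks still present"

  # re-acquire them and confirm the target holds the lock again
  ./scripts/rpc.py framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock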
00:05:01.801 [2024-11-15 11:23:02.604604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032129 ] 00:05:02.061 [2024-11-15 11:23:02.702354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.061 [2024-11-15 11:23:02.751066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1032347 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1032347 /var/tmp/spdk2.sock 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1032347 ']' 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.320 11:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.320 [2024-11-15 11:23:03.029503] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:02.320 [2024-11-15 11:23:03.029552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032347 ] 00:05:02.320 [2024-11-15 11:23:03.151811] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:02.320 [2024-11-15 11:23:03.151854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.580 [2024-11-15 11:23:03.249593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.147 11:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.147 11:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:03.147 11:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1032129 00:05:03.147 11:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.147 11:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1032129 00:05:03.406 lslocks: write error 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1032129 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1032129 ']' 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1032129 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032129 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032129' 00:05:03.406 killing process with pid 1032129 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1032129 00:05:03.406 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1032129 00:05:03.973 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1032347 00:05:03.973 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1032347 ']' 00:05:03.973 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1032347 00:05:03.973 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:03.973 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.973 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032347 00:05:04.232 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.232 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.232 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032347' 00:05:04.232 
killing process with pid 1032347 00:05:04.232 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1032347 00:05:04.232 11:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1032347 00:05:04.496 00:05:04.496 real 0m2.661s 00:05:04.496 user 0m2.813s 00:05:04.496 sys 0m0.865s 00:05:04.496 11:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.496 11:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.496 ************************************ 00:05:04.496 END TEST non_locking_app_on_locked_coremask 00:05:04.496 ************************************ 00:05:04.496 11:23:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:04.496 11:23:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.496 11:23:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.496 11:23:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.496 ************************************ 00:05:04.496 START TEST locking_app_on_unlocked_coremask 00:05:04.496 ************************************ 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1032652 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1032652 /var/tmp/spdk.sock 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1032652 ']' 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.496 11:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.496 [2024-11-15 11:23:05.338392] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:04.496 [2024-11-15 11:23:05.338452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032652 ] 00:05:04.754 [2024-11-15 11:23:05.433211] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
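The non_locking_app_on_locked_coremask case that just ended comes down to one locked instance on core 0 plus a second instance on the same mask started with --disable-cpumask-locks, and both must come up; a sketch with the sockets as used in the run above:

  ./build/bin/spdk_tgt -m 0x1 &                                                # claims the core 0 lock
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # same core, no lock taken
  # both targets start and listen; only the first holds /var/tmp/spdk_cpu_lock_000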
00:05:04.754 [2024-11-15 11:23:05.433243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.754 [2024-11-15 11:23:05.480715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1032916 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1032916 /var/tmp/spdk2.sock 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1032916 ']' 00:05:05.323 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.324 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.324 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.324 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.324 11:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.583 [2024-11-15 11:23:06.181118] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:05.583 [2024-11-15 11:23:06.181165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032916 ] 00:05:05.583 [2024-11-15 11:23:06.303780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.583 [2024-11-15 11:23:06.400666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.519 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1032916 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1032916 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.520 lslocks: write error 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1032652 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1032652 ']' 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1032652 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032652 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032652' 00:05:06.520 killing process with pid 1032652 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1032652 00:05:06.520 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1032652 00:05:07.458 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1032916 00:05:07.458 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1032916 ']' 00:05:07.458 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1032916 00:05:07.458 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:07.458 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.458 11:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032916 00:05:07.458 11:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.458 11:23:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.458 11:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032916' 00:05:07.458 killing process with pid 1032916 00:05:07.458 11:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1032916 00:05:07.458 11:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1032916 00:05:07.718 00:05:07.718 real 0m3.103s 00:05:07.718 user 0m3.377s 00:05:07.718 sys 0m0.866s 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.718 ************************************ 00:05:07.718 END TEST locking_app_on_unlocked_coremask 00:05:07.718 ************************************ 00:05:07.718 11:23:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.718 11:23:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.718 11:23:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.718 11:23:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.718 ************************************ 00:05:07.718 START TEST locking_app_on_locked_coremask 00:05:07.718 ************************************ 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1033362 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1033362 /var/tmp/spdk.sock 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1033362 ']' 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.718 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.718 [2024-11-15 11:23:08.503728] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
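locking_app_on_unlocked_coremask, logged above, is the mirror image: the first instance gives up the core lock with --disable-cpumask-locks, so a second, locking instance on the same core starts cleanly and becomes the lock owner; a sketch:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # unlocked instance on core 0
  unlocked_pid=$!
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # locking instance on the same core
  locked_pid=$!
  lslocks -p "$locked_pid" | grep -q spdk_cpu_lock             # the second instance owns the core lock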
00:05:07.718 [2024-11-15 11:23:08.503785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033362 ] 00:05:07.977 [2024-11-15 11:23:08.587905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.977 [2024-11-15 11:23:08.638373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1033475 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1033475 /var/tmp/spdk2.sock 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1033475 /var/tmp/spdk2.sock 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1033475 /var/tmp/spdk2.sock 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1033475 ']' 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:08.236 11:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.237 [2024-11-15 11:23:08.904864] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:08.237 [2024-11-15 11:23:08.904908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033475 ] 00:05:08.237 [2024-11-15 11:23:09.026921] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1033362 has claimed it. 00:05:08.237 [2024-11-15 11:23:09.026966] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1033475) - No such process 00:05:08.805 ERROR: process (pid: 1033475) is no longer running 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1033362 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1033362 00:05:08.805 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.372 lslocks: write error 00:05:09.372 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1033362 00:05:09.372 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1033362 ']' 00:05:09.372 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1033362 00:05:09.372 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:09.372 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:09.372 11:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1033362 00:05:09.372 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:09.372 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:09.372 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1033362' 00:05:09.372 killing process with pid 1033362 00:05:09.372 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1033362 00:05:09.372 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1033362 00:05:09.631 00:05:09.631 real 0m1.899s 00:05:09.631 user 0m2.062s 00:05:09.631 sys 0m0.626s 00:05:09.631 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:05:09.631 11:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 END TEST locking_app_on_locked_coremask 00:05:09.631 ************************************ 00:05:09.631 11:23:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.631 11:23:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.631 11:23:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.631 11:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 ************************************ 00:05:09.631 START TEST locking_overlapped_coremask 00:05:09.631 ************************************ 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1033768 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1033768 /var/tmp/spdk.sock 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1033768 ']' 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.631 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.631 [2024-11-15 11:23:10.460794] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
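The locking_app_on_locked_coremask case a few lines back is the negative check: with locks left enabled, a second target on an already-claimed core must refuse to start; a sketch, with the expected messages being the ones logged above:

  ./build/bin/spdk_tgt -m 0x1 &                                 # holds the core 0 lock
  if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then   # same core, locks enabled
      echo "unexpected: second instance should have exited"
  fi
  # expected: "Cannot create lock on core 0, probably process <pid> has claimed it"
  #           "Unable to acquire lock on assigned core mask - exiting."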
00:05:09.631 [2024-11-15 11:23:10.460850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033768 ] 00:05:09.891 [2024-11-15 11:23:10.547260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.891 [2024-11-15 11:23:10.599869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.891 [2024-11-15 11:23:10.599969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.891 [2024-11-15 11:23:10.599971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1033784 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1033784 /var/tmp/spdk2.sock 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1033784 /var/tmp/spdk2.sock 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1033784 /var/tmp/spdk2.sock 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1033784 ']' 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.150 11:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.150 [2024-11-15 11:23:10.892493] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
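The first target in this test runs with -m 0x7 and the second, just launched on /var/tmp/spdk2.sock, with -m 0x1c; the two masks share core 2, which is why the claim attempt that follows fails. A small illustrative helper (not part of the test suite) that expands a hex coremask into core indices:

    # Illustrative only: list the core indices selected by a hex CPU mask.
    mask_to_cores() {
        local mask=$(( $1 ))      # e.g. 0x7 -> 7, 0x1c -> 28
        local core=0 cores=()
        while (( mask > 0 )); do
            if (( mask & 1 )); then
                cores+=("$core")
            fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0x7     # -> 0 1 2
    mask_to_cores 0x1c    # -> 2 3 4  (overlaps 0x7 on core 2)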
00:05:10.150 [2024-11-15 11:23:10.892557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033784 ] 00:05:10.151 [2024-11-15 11:23:10.989465] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1033768 has claimed it. 00:05:10.151 [2024-11-15 11:23:10.989502] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1033784) - No such process 00:05:11.088 ERROR: process (pid: 1033784) is no longer running 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1033768 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 1033768 ']' 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 1033768 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1033768 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1033768' 00:05:11.088 killing process with pid 1033768 00:05:11.088 11:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 1033768 00:05:11.088 11:23:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 1033768 00:05:11.347 00:05:11.347 real 0m1.599s 00:05:11.347 user 0m4.593s 00:05:11.347 sys 0m0.449s 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.347 ************************************ 00:05:11.347 END TEST locking_overlapped_coremask 00:05:11.347 ************************************ 00:05:11.347 11:23:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.347 11:23:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.347 11:23:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.347 11:23:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.347 ************************************ 00:05:11.347 START TEST locking_overlapped_coremask_via_rpc 00:05:11.347 ************************************ 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1034072 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1034072 /var/tmp/spdk.sock 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1034072 ']' 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:11.347 11:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.347 [2024-11-15 11:23:12.095855] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:11.347 [2024-11-15 11:23:12.095892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034072 ] 00:05:11.347 [2024-11-15 11:23:12.175945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
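Before the via_rpc variant above gets going, the previous test finished with a check_remaining_locks step (traced a few entries back) that compares the lock files actually present in /var/tmp against the set expected for a 3-core mask. Its shape, reconstructed from the traced commands:

    # Reconstructed from the check_remaining_locks trace above: with -m 0x7 the
    # surviving target should own exactly lock files 000, 001 and 002.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }
    check_remaining_locks && echo "only the expected core locks remain"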
00:05:11.347 [2024-11-15 11:23:12.175976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.606 [2024-11-15 11:23:12.230123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.606 [2024-11-15 11:23:12.230214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.606 [2024-11-15 11:23:12.230226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1034333 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1034333 /var/tmp/spdk2.sock 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1034333 ']' 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.174 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.434 [2024-11-15 11:23:13.071801] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:12.434 [2024-11-15 11:23:13.071866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034333 ] 00:05:12.434 [2024-11-15 11:23:13.167329] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.434 [2024-11-15 11:23:13.167354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.434 [2024-11-15 11:23:13.248135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.434 [2024-11-15 11:23:13.251483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.434 [2024-11-15 11:23:13.251485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.003 [2024-11-15 11:23:13.650526] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1034072 has claimed it. 
00:05:13.003 request: 00:05:13.003 { 00:05:13.003 "method": "framework_enable_cpumask_locks", 00:05:13.003 "req_id": 1 00:05:13.003 } 00:05:13.003 Got JSON-RPC error response 00:05:13.003 response: 00:05:13.003 { 00:05:13.003 "code": -32603, 00:05:13.003 "message": "Failed to claim CPU core: 2" 00:05:13.003 } 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1034072 /var/tmp/spdk.sock 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1034072 ']' 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:13.003 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1034333 /var/tmp/spdk2.sock 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1034333 ']' 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
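In this via_rpc variant both targets start with --disable-cpumask-locks, so the overlapping masks (0x7 and 0x1c) coexist; the first target then takes the locks through the framework_enable_cpumask_locks RPC, and the same RPC sent to the second target fails with the -32603 "Failed to claim CPU core: 2" response shown above. The sequence, expressed directly with scripts/rpc.py (socket paths as in the log):

    # First target (default /var/tmp/spdk.sock) claims its cores; the second
    # target on spdk2.sock then cannot claim core 2 and the RPC returns -32603.
    ./scripts/rpc.py framework_enable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected failure: core 2 already claimed"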
00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:13.262 11:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.521 00:05:13.521 real 0m2.158s 00:05:13.521 user 0m1.085s 00:05:13.521 sys 0m0.170s 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.521 11:23:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 ************************************ 00:05:13.521 END TEST locking_overlapped_coremask_via_rpc 00:05:13.521 ************************************ 00:05:13.521 11:23:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:13.521 11:23:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1034072 ]] 00:05:13.521 11:23:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1034072 00:05:13.521 11:23:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1034072 ']' 00:05:13.521 11:23:14 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1034072 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1034072 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1034072' 00:05:13.522 killing process with pid 1034072 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1034072 00:05:13.522 11:23:14 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1034072 00:05:14.089 11:23:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1034333 ]] 00:05:14.089 11:23:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1034333 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1034333 ']' 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1034333 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1034333 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1034333' 00:05:14.089 killing process with pid 1034333 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1034333 00:05:14.089 11:23:14 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1034333 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1034072 ]] 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1034072 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1034072 ']' 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1034072 00:05:14.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1034072) - No such process 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1034072 is not found' 00:05:14.348 Process with pid 1034072 is not found 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1034333 ]] 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1034333 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1034333 ']' 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1034333 00:05:14.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1034333) - No such process 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1034333 is not found' 00:05:14.348 Process with pid 1034333 is not found 00:05:14.348 11:23:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.348 00:05:14.348 real 0m15.195s 00:05:14.348 user 0m27.308s 00:05:14.348 sys 0m4.998s 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.348 11:23:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.348 ************************************ 00:05:14.348 END TEST cpu_locks 00:05:14.348 ************************************ 00:05:14.348 00:05:14.348 real 0m41.704s 00:05:14.348 user 1m21.448s 00:05:14.348 sys 0m9.053s 00:05:14.348 11:23:15 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.348 11:23:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.348 ************************************ 00:05:14.348 END TEST event 00:05:14.348 ************************************ 00:05:14.348 11:23:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.348 11:23:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.348 11:23:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.348 11:23:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.348 ************************************ 00:05:14.348 START TEST thread 00:05:14.348 ************************************ 00:05:14.348 11:23:15 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.348 * Looking for test storage... 00:05:14.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:14.348 11:23:15 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.348 11:23:15 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.348 11:23:15 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.607 11:23:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.607 11:23:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.607 11:23:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.607 11:23:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.607 11:23:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.607 11:23:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.607 11:23:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.607 11:23:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.607 11:23:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.607 11:23:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.607 11:23:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.607 11:23:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:14.607 11:23:15 thread -- scripts/common.sh@345 -- # : 1 00:05:14.607 11:23:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.607 11:23:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.607 11:23:15 thread -- scripts/common.sh@365 -- # decimal 1 00:05:14.607 11:23:15 thread -- scripts/common.sh@353 -- # local d=1 00:05:14.607 11:23:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.607 11:23:15 thread -- scripts/common.sh@355 -- # echo 1 00:05:14.607 11:23:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.607 11:23:15 thread -- scripts/common.sh@366 -- # decimal 2 00:05:14.607 11:23:15 thread -- scripts/common.sh@353 -- # local d=2 00:05:14.607 11:23:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.607 11:23:15 thread -- scripts/common.sh@355 -- # echo 2 00:05:14.607 11:23:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.607 11:23:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.607 11:23:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.607 11:23:15 thread -- scripts/common.sh@368 -- # return 0 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.607 --rc genhtml_branch_coverage=1 00:05:14.607 --rc genhtml_function_coverage=1 00:05:14.607 --rc genhtml_legend=1 00:05:14.607 --rc geninfo_all_blocks=1 00:05:14.607 --rc geninfo_unexecuted_blocks=1 00:05:14.607 00:05:14.607 ' 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.607 --rc genhtml_branch_coverage=1 00:05:14.607 --rc genhtml_function_coverage=1 00:05:14.607 --rc genhtml_legend=1 00:05:14.607 --rc geninfo_all_blocks=1 00:05:14.607 --rc geninfo_unexecuted_blocks=1 00:05:14.607 
00:05:14.607 ' 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.607 --rc genhtml_branch_coverage=1 00:05:14.607 --rc genhtml_function_coverage=1 00:05:14.607 --rc genhtml_legend=1 00:05:14.607 --rc geninfo_all_blocks=1 00:05:14.607 --rc geninfo_unexecuted_blocks=1 00:05:14.607 00:05:14.607 ' 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.607 --rc genhtml_branch_coverage=1 00:05:14.607 --rc genhtml_function_coverage=1 00:05:14.607 --rc genhtml_legend=1 00:05:14.607 --rc geninfo_all_blocks=1 00:05:14.607 --rc geninfo_unexecuted_blocks=1 00:05:14.607 00:05:14.607 ' 00:05:14.607 11:23:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.607 11:23:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.607 ************************************ 00:05:14.607 START TEST thread_poller_perf 00:05:14.607 ************************************ 00:05:14.607 11:23:15 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.607 [2024-11-15 11:23:15.341944] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:14.607 [2024-11-15 11:23:15.342002] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034715 ] 00:05:14.607 [2024-11-15 11:23:15.428963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.867 [2024-11-15 11:23:15.478297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.867 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:15.804 [2024-11-15T10:23:16.657Z] ====================================== 00:05:15.804 [2024-11-15T10:23:16.657Z] busy:2212703842 (cyc) 00:05:15.804 [2024-11-15T10:23:16.657Z] total_run_count: 255000 00:05:15.804 [2024-11-15T10:23:16.657Z] tsc_hz: 2200000000 (cyc) 00:05:15.804 [2024-11-15T10:23:16.657Z] ====================================== 00:05:15.804 [2024-11-15T10:23:16.657Z] poller_cost: 8677 (cyc), 3944 (nsec) 00:05:15.804 00:05:15.804 real 0m1.213s 00:05:15.804 user 0m1.140s 00:05:15.804 sys 0m0.068s 00:05:15.804 11:23:16 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.804 11:23:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.804 ************************************ 00:05:15.804 END TEST thread_poller_perf 00:05:15.804 ************************************ 00:05:15.804 11:23:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.804 11:23:16 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:15.804 11:23:16 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.804 11:23:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.804 ************************************ 00:05:15.804 START TEST thread_poller_perf 00:05:15.804 ************************************ 00:05:15.804 11:23:16 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.804 [2024-11-15 11:23:16.627561] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:15.804 [2024-11-15 11:23:16.627630] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034990 ] 00:05:16.063 [2024-11-15 11:23:16.715346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.063 [2024-11-15 11:23:16.761842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.063 Running 1000 pollers for 1 seconds with 0 microseconds period. 
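The poller_perf summary is plain arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz. Re-deriving the first run's numbers from the table above:

    # Re-deriving the first summary above from its own values.
    busy=2212703842          # busy (cyc)
    runs=255000              # total_run_count
    tsc_hz=2200000000        # tsc_hz (cyc/s)
    echo "poller_cost_cyc=$(( busy / runs ))"                          # 8677
    echo "poller_cost_nsec=$(( busy * 1000000000 / tsc_hz / runs ))"   # 3944

The second run that follows uses a 0-microsecond period, completes far more iterations (3,231,000 vs 255,000), and the per-poller cost correspondingly drops to 681 cyc / 309 nsec.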
00:05:17.000 [2024-11-15T10:23:17.853Z] ====================================== 00:05:17.000 [2024-11-15T10:23:17.853Z] busy:2202557238 (cyc) 00:05:17.000 [2024-11-15T10:23:17.853Z] total_run_count: 3231000 00:05:17.000 [2024-11-15T10:23:17.853Z] tsc_hz: 2200000000 (cyc) 00:05:17.000 [2024-11-15T10:23:17.853Z] ====================================== 00:05:17.000 [2024-11-15T10:23:17.853Z] poller_cost: 681 (cyc), 309 (nsec) 00:05:17.000 00:05:17.000 real 0m1.205s 00:05:17.000 user 0m1.121s 00:05:17.000 sys 0m0.079s 00:05:17.000 11:23:17 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.000 11:23:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.000 ************************************ 00:05:17.000 END TEST thread_poller_perf 00:05:17.000 ************************************ 00:05:17.000 11:23:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:17.000 00:05:17.000 real 0m2.727s 00:05:17.000 user 0m2.411s 00:05:17.000 sys 0m0.323s 00:05:17.000 11:23:17 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.000 11:23:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.000 ************************************ 00:05:17.000 END TEST thread 00:05:17.000 ************************************ 00:05:17.259 11:23:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:17.260 11:23:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.260 11:23:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.260 11:23:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.260 11:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 ************************************ 00:05:17.260 START TEST app_cmdline 00:05:17.260 ************************************ 00:05:17.260 11:23:17 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.260 * Looking for test storage... 
00:05:17.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:17.260 11:23:17 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.260 11:23:17 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.260 11:23:17 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.260 11:23:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.260 --rc genhtml_branch_coverage=1 00:05:17.260 --rc genhtml_function_coverage=1 00:05:17.260 --rc genhtml_legend=1 00:05:17.260 --rc geninfo_all_blocks=1 00:05:17.260 --rc geninfo_unexecuted_blocks=1 00:05:17.260 00:05:17.260 ' 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.260 --rc genhtml_branch_coverage=1 00:05:17.260 --rc genhtml_function_coverage=1 00:05:17.260 --rc genhtml_legend=1 00:05:17.260 --rc geninfo_all_blocks=1 00:05:17.260 --rc geninfo_unexecuted_blocks=1 
00:05:17.260 00:05:17.260 ' 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.260 --rc genhtml_branch_coverage=1 00:05:17.260 --rc genhtml_function_coverage=1 00:05:17.260 --rc genhtml_legend=1 00:05:17.260 --rc geninfo_all_blocks=1 00:05:17.260 --rc geninfo_unexecuted_blocks=1 00:05:17.260 00:05:17.260 ' 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.260 --rc genhtml_branch_coverage=1 00:05:17.260 --rc genhtml_function_coverage=1 00:05:17.260 --rc genhtml_legend=1 00:05:17.260 --rc geninfo_all_blocks=1 00:05:17.260 --rc geninfo_unexecuted_blocks=1 00:05:17.260 00:05:17.260 ' 00:05:17.260 11:23:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:17.260 11:23:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1035324 00:05:17.260 11:23:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1035324 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 1035324 ']' 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.260 11:23:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.260 11:23:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:17.519 [2024-11-15 11:23:18.116081] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:17.519 [2024-11-15 11:23:18.116144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035324 ] 00:05:17.519 [2024-11-15 11:23:18.211321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.519 [2024-11-15 11:23:18.259972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.777 11:23:18 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.777 11:23:18 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:17.777 11:23:18 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:18.037 { 00:05:18.037 "version": "SPDK v25.01-pre git sha1 4b2d483c6", 00:05:18.037 "fields": { 00:05:18.037 "major": 25, 00:05:18.037 "minor": 1, 00:05:18.037 "patch": 0, 00:05:18.037 "suffix": "-pre", 00:05:18.037 "commit": "4b2d483c6" 00:05:18.037 } 00:05:18.037 } 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:18.037 11:23:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:18.037 11:23:18 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.296 request: 00:05:18.296 { 00:05:18.296 "method": "env_dpdk_get_mem_stats", 00:05:18.296 "req_id": 1 00:05:18.296 } 00:05:18.296 Got JSON-RPC error response 00:05:18.296 response: 00:05:18.296 { 00:05:18.296 "code": -32601, 00:05:18.296 "message": "Method not found" 00:05:18.296 } 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.296 11:23:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1035324 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 1035324 ']' 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 1035324 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1035324 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1035324' 00:05:18.296 killing process with pid 1035324 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@971 -- # kill 1035324 00:05:18.296 11:23:19 app_cmdline -- common/autotest_common.sh@976 -- # wait 1035324 00:05:18.865 00:05:18.865 real 0m1.561s 00:05:18.865 user 0m1.997s 00:05:18.865 sys 0m0.462s 00:05:18.865 11:23:19 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.865 11:23:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.865 ************************************ 00:05:18.865 END TEST app_cmdline 00:05:18.865 ************************************ 00:05:18.865 11:23:19 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:18.865 11:23:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.865 11:23:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.865 11:23:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.865 ************************************ 00:05:18.865 START TEST version 00:05:18.865 ************************************ 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:18.865 * Looking for test storage... 
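The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the env_dpdk_get_mem_stats call gets the -32601 "Method not found" response shown in the log. Against such a target the behaviour looks like this with scripts/rpc.py:

    # Target started with --rpcs-allowed spdk_get_version,rpc_get_methods:
    ./scripts/rpc.py spdk_get_version         # allowed, returns the version object shown above
    ./scripts/rpc.py rpc_get_methods          # allowed, lists exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected with JSON-RPC -32601 "Method not found"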
00:05:18.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.865 11:23:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.865 11:23:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.865 11:23:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.865 11:23:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.865 11:23:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.865 11:23:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.865 11:23:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.865 11:23:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.865 11:23:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.865 11:23:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.865 11:23:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.865 11:23:19 version -- scripts/common.sh@344 -- # case "$op" in 00:05:18.865 11:23:19 version -- scripts/common.sh@345 -- # : 1 00:05:18.865 11:23:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.865 11:23:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.865 11:23:19 version -- scripts/common.sh@365 -- # decimal 1 00:05:18.865 11:23:19 version -- scripts/common.sh@353 -- # local d=1 00:05:18.865 11:23:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.865 11:23:19 version -- scripts/common.sh@355 -- # echo 1 00:05:18.865 11:23:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.865 11:23:19 version -- scripts/common.sh@366 -- # decimal 2 00:05:18.865 11:23:19 version -- scripts/common.sh@353 -- # local d=2 00:05:18.865 11:23:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.865 11:23:19 version -- scripts/common.sh@355 -- # echo 2 00:05:18.865 11:23:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.865 11:23:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.865 11:23:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.865 11:23:19 version -- scripts/common.sh@368 -- # return 0 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.865 --rc genhtml_branch_coverage=1 00:05:18.865 --rc genhtml_function_coverage=1 00:05:18.865 --rc genhtml_legend=1 00:05:18.865 --rc geninfo_all_blocks=1 00:05:18.865 --rc geninfo_unexecuted_blocks=1 00:05:18.865 00:05:18.865 ' 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.865 --rc genhtml_branch_coverage=1 00:05:18.865 --rc genhtml_function_coverage=1 00:05:18.865 --rc genhtml_legend=1 00:05:18.865 --rc geninfo_all_blocks=1 00:05:18.865 --rc geninfo_unexecuted_blocks=1 00:05:18.865 00:05:18.865 ' 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.865 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.865 --rc genhtml_branch_coverage=1 00:05:18.865 --rc genhtml_function_coverage=1 00:05:18.865 --rc genhtml_legend=1 00:05:18.865 --rc geninfo_all_blocks=1 00:05:18.865 --rc geninfo_unexecuted_blocks=1 00:05:18.865 00:05:18.865 ' 00:05:18.865 11:23:19 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.865 --rc genhtml_branch_coverage=1 00:05:18.865 --rc genhtml_function_coverage=1 00:05:18.865 --rc genhtml_legend=1 00:05:18.865 --rc geninfo_all_blocks=1 00:05:18.865 --rc geninfo_unexecuted_blocks=1 00:05:18.865 00:05:18.865 ' 00:05:18.865 11:23:19 version -- app/version.sh@17 -- # get_header_version major 00:05:18.865 11:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:18.865 11:23:19 version -- app/version.sh@14 -- # cut -f2 00:05:18.865 11:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.865 11:23:19 version -- app/version.sh@17 -- # major=25 00:05:18.865 11:23:19 version -- app/version.sh@18 -- # get_header_version minor 00:05:18.866 11:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:18.866 11:23:19 version -- app/version.sh@14 -- # cut -f2 00:05:18.866 11:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.125 11:23:19 version -- app/version.sh@18 -- # minor=1 00:05:19.125 11:23:19 version -- app/version.sh@19 -- # get_header_version patch 00:05:19.125 11:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.125 11:23:19 version -- app/version.sh@14 -- # cut -f2 00:05:19.125 11:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.125 11:23:19 version -- app/version.sh@19 -- # patch=0 00:05:19.125 11:23:19 version -- app/version.sh@20 -- # get_header_version suffix 00:05:19.125 11:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.125 11:23:19 version -- app/version.sh@14 -- # cut -f2 00:05:19.125 11:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.125 11:23:19 version -- app/version.sh@20 -- # suffix=-pre 00:05:19.125 11:23:19 version -- app/version.sh@22 -- # version=25.1 00:05:19.125 11:23:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:19.125 11:23:19 version -- app/version.sh@28 -- # version=25.1rc0 00:05:19.125 11:23:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:19.125 11:23:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:19.125 11:23:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:19.125 11:23:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:19.125 00:05:19.125 real 0m0.248s 00:05:19.125 user 0m0.154s 00:05:19.125 sys 0m0.136s 00:05:19.125 11:23:19 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.125 
11:23:19 version -- common/autotest_common.sh@10 -- # set +x 00:05:19.125 ************************************ 00:05:19.125 END TEST version 00:05:19.125 ************************************ 00:05:19.125 11:23:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:19.125 11:23:19 -- spdk/autotest.sh@194 -- # uname -s 00:05:19.125 11:23:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:19.125 11:23:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.125 11:23:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.125 11:23:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:19.125 11:23:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.125 11:23:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.125 11:23:19 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:19.125 11:23:19 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:19.125 11:23:19 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:19.125 11:23:19 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:19.125 11:23:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.125 11:23:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.125 ************************************ 00:05:19.125 START TEST nvmf_tcp 00:05:19.125 ************************************ 00:05:19.125 11:23:19 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:19.125 * Looking for test storage... 
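The version test that ends just above derives the version by grepping the SPDK_VERSION_* defines out of include/spdk/version.h and cross-checking the result against the installed Python package. Condensed from the get_header_version calls in the trace (paths relative to the spdk checkout):

    # Condensed from the version.sh trace above.
    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    # In this run: major=25 minor=1 suffix=-pre; version.sh maps "-pre" to "25.1rc0"
    # before comparing with the Python module.
    python3 -c 'import spdk; print(spdk.__version__)'   # 25.1rc0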
00:05:19.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:19.391 11:23:19 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.391 11:23:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.391 11:23:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.391 11:23:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.391 --rc genhtml_branch_coverage=1 00:05:19.391 --rc genhtml_function_coverage=1 00:05:19.391 --rc genhtml_legend=1 00:05:19.391 --rc geninfo_all_blocks=1 00:05:19.391 --rc geninfo_unexecuted_blocks=1 00:05:19.391 00:05:19.391 ' 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.391 --rc genhtml_branch_coverage=1 00:05:19.391 --rc genhtml_function_coverage=1 00:05:19.391 --rc genhtml_legend=1 00:05:19.391 --rc geninfo_all_blocks=1 00:05:19.391 --rc geninfo_unexecuted_blocks=1 00:05:19.391 00:05:19.391 ' 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.391 --rc genhtml_branch_coverage=1 00:05:19.391 --rc genhtml_function_coverage=1 00:05:19.391 --rc genhtml_legend=1 00:05:19.391 --rc geninfo_all_blocks=1 00:05:19.391 --rc geninfo_unexecuted_blocks=1 00:05:19.391 00:05:19.391 ' 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.391 --rc genhtml_branch_coverage=1 00:05:19.391 --rc genhtml_function_coverage=1 00:05:19.391 --rc genhtml_legend=1 00:05:19.391 --rc geninfo_all_blocks=1 00:05:19.391 --rc geninfo_unexecuted_blocks=1 00:05:19.391 00:05:19.391 ' 00:05:19.391 11:23:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:19.391 11:23:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:19.391 11:23:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.391 11:23:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.391 ************************************ 00:05:19.391 START TEST nvmf_target_core 00:05:19.391 ************************************ 00:05:19.391 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:19.391 * Looking for test storage... 00:05:19.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:19.391 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.391 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.391 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.715 --rc genhtml_branch_coverage=1 00:05:19.715 --rc genhtml_function_coverage=1 00:05:19.715 --rc genhtml_legend=1 00:05:19.715 --rc geninfo_all_blocks=1 00:05:19.715 --rc geninfo_unexecuted_blocks=1 00:05:19.715 00:05:19.715 ' 00:05:19.715 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.716 --rc genhtml_branch_coverage=1 00:05:19.716 --rc genhtml_function_coverage=1 00:05:19.716 --rc genhtml_legend=1 00:05:19.716 --rc geninfo_all_blocks=1 00:05:19.716 --rc geninfo_unexecuted_blocks=1 00:05:19.716 00:05:19.716 ' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.716 --rc genhtml_branch_coverage=1 00:05:19.716 --rc genhtml_function_coverage=1 00:05:19.716 --rc genhtml_legend=1 00:05:19.716 --rc geninfo_all_blocks=1 00:05:19.716 --rc geninfo_unexecuted_blocks=1 00:05:19.716 00:05:19.716 ' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.716 --rc genhtml_branch_coverage=1 00:05:19.716 --rc genhtml_function_coverage=1 00:05:19.716 --rc genhtml_legend=1 00:05:19.716 --rc geninfo_all_blocks=1 00:05:19.716 --rc geninfo_unexecuted_blocks=1 00:05:19.716 00:05:19.716 ' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:19.716 
************************************ 00:05:19.716 START TEST nvmf_abort 00:05:19.716 ************************************ 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:19.716 * Looking for test storage... 00:05:19.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.716 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.717 --rc genhtml_branch_coverage=1 00:05:19.717 --rc genhtml_function_coverage=1 00:05:19.717 --rc genhtml_legend=1 00:05:19.717 --rc geninfo_all_blocks=1 00:05:19.717 --rc geninfo_unexecuted_blocks=1 00:05:19.717 00:05:19.717 ' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.717 --rc genhtml_branch_coverage=1 00:05:19.717 --rc genhtml_function_coverage=1 00:05:19.717 --rc genhtml_legend=1 00:05:19.717 --rc geninfo_all_blocks=1 00:05:19.717 --rc geninfo_unexecuted_blocks=1 00:05:19.717 00:05:19.717 ' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.717 --rc genhtml_branch_coverage=1 00:05:19.717 --rc genhtml_function_coverage=1 00:05:19.717 --rc genhtml_legend=1 00:05:19.717 --rc geninfo_all_blocks=1 00:05:19.717 --rc geninfo_unexecuted_blocks=1 00:05:19.717 00:05:19.717 ' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.717 --rc genhtml_branch_coverage=1 00:05:19.717 --rc genhtml_function_coverage=1 00:05:19.717 --rc genhtml_legend=1 00:05:19.717 --rc geninfo_all_blocks=1 00:05:19.717 --rc geninfo_unexecuted_blocks=1 00:05:19.717 00:05:19.717 ' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:19.717 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:20.011 11:23:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:25.377 11:23:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:25.377 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:25.377 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:25.377 11:23:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:25.377 Found net devices under 0000:af:00.0: cvl_0_0 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:25.377 Found net devices under 0000:af:00.1: cvl_0_1 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:25.377 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:25.378 11:23:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:25.378 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:25.636 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:25.637 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:25.637 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:25.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:25.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:05:25.637 00:05:25.637 --- 10.0.0.2 ping statistics --- 00:05:25.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:25.637 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:05:25.637 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:25.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:25.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:05:25.637 00:05:25.637 --- 10.0.0.1 ping statistics --- 00:05:25.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:25.637 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:05:25.637 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:25.637 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:25.637 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1039211 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1039211 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1039211 ']' 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.896 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:25.896 [2024-11-15 11:23:26.585783] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:05:25.896 [2024-11-15 11:23:26.585841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:25.896 [2024-11-15 11:23:26.659008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.896 [2024-11-15 11:23:26.697097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:25.896 [2024-11-15 11:23:26.697133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:25.896 [2024-11-15 11:23:26.697139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.896 [2024-11-15 11:23:26.697146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.896 [2024-11-15 11:23:26.697150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:25.896 [2024-11-15 11:23:26.698470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.896 [2024-11-15 11:23:26.698545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.896 [2024-11-15 11:23:26.698547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.155 [2024-11-15 11:23:26.849206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.155 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.156 Malloc0 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.156 Delay0 
00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.156 [2024-11-15 11:23:26.925250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.156 11:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:26.415 [2024-11-15 11:23:27.062585] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:28.316 Initializing NVMe Controllers 00:05:28.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:28.316 controller IO queue size 128 less than required 00:05:28.316 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:28.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:28.316 Initialization complete. Launching workers. 
00:05:28.316 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23803 00:05:28.316 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23864, failed to submit 62 00:05:28.316 success 23807, unsuccessful 57, failed 0 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:28.316 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:28.316 rmmod nvme_tcp 00:05:28.574 rmmod nvme_fabrics 00:05:28.574 rmmod nvme_keyring 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1039211 ']' 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1039211 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1039211 ']' 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1039211 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1039211 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1039211' 00:05:28.574 killing process with pid 1039211 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1039211 00:05:28.574 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1039211 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:28.831 11:23:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:28.831 11:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:30.730 00:05:30.730 real 0m11.169s 00:05:30.730 user 0m11.805s 00:05:30.730 sys 0m5.301s 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.730 ************************************ 00:05:30.730 END TEST nvmf_abort 00:05:30.730 ************************************ 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.730 11:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:30.988 ************************************ 00:05:30.988 START TEST nvmf_ns_hotplug_stress 00:05:30.988 ************************************ 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:30.988 * Looking for test storage... 
00:05:30.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.988 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.989 --rc genhtml_branch_coverage=1 00:05:30.989 --rc genhtml_function_coverage=1 00:05:30.989 --rc genhtml_legend=1 00:05:30.989 --rc geninfo_all_blocks=1 00:05:30.989 --rc geninfo_unexecuted_blocks=1 00:05:30.989 00:05:30.989 ' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.989 --rc genhtml_branch_coverage=1 00:05:30.989 --rc genhtml_function_coverage=1 00:05:30.989 --rc genhtml_legend=1 00:05:30.989 --rc geninfo_all_blocks=1 00:05:30.989 --rc geninfo_unexecuted_blocks=1 00:05:30.989 00:05:30.989 ' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.989 --rc genhtml_branch_coverage=1 00:05:30.989 --rc genhtml_function_coverage=1 00:05:30.989 --rc genhtml_legend=1 00:05:30.989 --rc geninfo_all_blocks=1 00:05:30.989 --rc geninfo_unexecuted_blocks=1 00:05:30.989 00:05:30.989 ' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.989 --rc genhtml_branch_coverage=1 00:05:30.989 --rc genhtml_function_coverage=1 00:05:30.989 --rc genhtml_legend=1 00:05:30.989 --rc geninfo_all_blocks=1 00:05:30.989 --rc geninfo_unexecuted_blocks=1 00:05:30.989 00:05:30.989 ' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:30.989 11:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:36.257 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:36.258 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:36.258 
11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:36.258 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:36.258 Found net devices under 0000:af:00.0: cvl_0_0 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:36.258 Found net devices under 0000:af:00.1: cvl_0_1 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:36.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:36.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:05:36.258 00:05:36.258 --- 10.0.0.2 ping statistics --- 00:05:36.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:36.258 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:36.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:36.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:05:36.258 00:05:36.258 --- 10.0.0.1 ping statistics --- 00:05:36.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:36.258 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:36.258 11:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1043347 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1043347 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1043347 ']' 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.258 11:23:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.258 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.259 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.259 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.259 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:36.259 [2024-11-15 11:23:37.071503] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:05:36.259 [2024-11-15 11:23:37.071562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:36.517 [2024-11-15 11:23:37.143834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.517 [2024-11-15 11:23:37.183442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:36.517 [2024-11-15 11:23:37.183480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:36.517 [2024-11-15 11:23:37.183487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:36.517 [2024-11-15 11:23:37.183493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:36.517 [2024-11-15 11:23:37.183497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
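(For reference, the target bring-up and namespace churn traced around this point can be reproduced by hand with the same RPCs the harness issues. The sketch below is a minimal, hand-written approximation, not the test script itself: the rpc.py path, nvmf_tgt binary, NQN, the 10.0.0.2:4420 listener and the bdev parameters are copied from this trace, while the sleep, the loop structure and the resize counter are illustrative assumptions, and it presumes the cvl_0_0_ns_spdk namespace and addresses have already been configured as shown earlier in the log.)

#!/usr/bin/env bash
# Hand-written sketch of the ns_hotplug_stress flow seen in this log (not the harness itself).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# Start the target inside the pre-built test namespace, as the trace above does.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
sleep 3   # illustrative; the harness instead waits for /var/tmp/spdk.sock to come up

# Transport, subsystem and TCP listener, matching the RPCs in the trace.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a malloc disk wrapped by a delay bdev, plus a resizable null bdev.
"$RPC" bdev_malloc_create 32 512 -b Malloc0
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
"$RPC" bdev_null_create NULL1 1000 512
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1

# Keep I/O running from the initiator side while namespaces churn underneath it.
"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

# Hotplug churn: while perf is still alive, pull namespace 1 out, re-add the
# delay namespace and grow the null bdev, mirroring the repeated @44-@50 steps.
size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
    "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
    size=$((size + 1))
    "$RPC" bdev_null_resize NULL1 "$size"
done
wait "$PERF_PID"

The point of the churn is to exercise namespace attach/detach against a live initiator connection; the "Read completed with error (sct=0, sc=11)" and "Message suppressed 999 times" lines that recur below are consistent with reads racing namespace removal rather than a harness failure.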
00:05:36.517 [2024-11-15 11:23:37.184945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.517 [2024-11-15 11:23:37.185039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.517 [2024-11-15 11:23:37.185040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:36.517 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:36.776 [2024-11-15 11:23:37.585899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.776 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:37.035 11:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:37.293 [2024-11-15 11:23:38.123743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:37.293 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:37.862 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:37.862 Malloc0 00:05:37.862 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:38.120 Delay0 00:05:38.379 11:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.637 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:38.896 NULL1 00:05:38.896 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:38.896 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1043809 00:05:38.896 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:38.896 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:38.896 11:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.394 Read completed with error (sct=0, sc=11) 00:05:40.394 11:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.653 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:40.653 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:40.911 true 00:05:40.911 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:40.911 11:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.479 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.737 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:41.737 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:41.996 true 00:05:41.996 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:41.996 11:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.564 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.564 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:42.564 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:42.822 true 00:05:43.081 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:43.081 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.340 11:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.598 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:43.598 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:43.857 true 00:05:43.857 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:43.857 11:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.793 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.052 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:45.052 11:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:45.309 true 00:05:45.309 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:45.309 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.567 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.826 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:45.826 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:46.085 true 00:05:46.085 11:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:46.085 11:23:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.344 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.912 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:46.912 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:46.913 true 00:05:47.171 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:47.171 11:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.108 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.108 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:48.108 11:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:48.366 true 00:05:48.366 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:48.366 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.933 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.933 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:48.933 11:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:49.192 true 00:05:49.451 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:49.451 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.709 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.968 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:49.968 11:23:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:50.226 true 00:05:50.226 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:50.226 11:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.162 11:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.421 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:51.421 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:51.680 true 00:05:51.680 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:51.680 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.939 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.197 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:52.197 11:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:52.456 true 00:05:52.456 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:52.456 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.714 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.973 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:52.973 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:53.232 true 00:05:53.232 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:53.232 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.167 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.425 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:54.425 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:54.684 true 00:05:54.942 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:54.942 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.200 11:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.459 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:55.459 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:55.717 true 00:05:55.717 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:55.717 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.652 11:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.652 11:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:56.652 11:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:56.910 true 00:05:56.910 11:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:56.910 11:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.168 11:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.426 11:23:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:57.426 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:57.683 true 00:05:57.941 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:57.941 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.875 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.875 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:58.875 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:59.132 true 00:05:59.132 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:59.132 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.391 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.957 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:59.957 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:59.957 true 00:05:59.957 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:05:59.957 11:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.524 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.524 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:00.524 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:00.782 true 00:06:01.040 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:01.040 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.976 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.976 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:01.976 11:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:02.234 true 00:06:02.235 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:02.235 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.802 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.802 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:02.802 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:03.059 true 00:06:03.316 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:03.316 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.573 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.829 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:03.829 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:04.086 true 00:06:04.086 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:04.086 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.019 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.277 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:05.277 11:24:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:05.535 true 00:06:05.535 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:05.535 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.794 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.794 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:05.794 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:06.361 true 00:06:06.361 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:06.361 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.929 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.187 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:07.187 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:07.445 true 00:06:07.445 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:07.445 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.704 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.962 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:07.962 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:08.220 true 00:06:08.478 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809 00:06:08.478 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.304 11:24:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:09.304 Initializing NVMe Controllers
00:06:09.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:09.304 Controller IO queue size 128, less than required.
00:06:09.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:09.304 Controller IO queue size 128, less than required.
00:06:09.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:09.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:09.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:09.304 Initialization complete. Launching workers.
00:06:09.304 ========================================================
00:06:09.304                                                                                Latency(us)
00:06:09.304 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:09.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     949.81       0.46   62328.73    2806.56 1089441.17
00:06:09.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   12671.49       6.19   10104.43    1871.10  574014.58
00:06:09.304 ========================================================
00:06:09.304 Total                                                                    :   13621.30       6.65   13746.02    1871.10 1089441.17
00:06:09.304
00:06:09.563 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:09.563 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:09.822 true
00:06:09.822 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1043809
00:06:09.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1043809) - No such process
00:06:09.822 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1043809
00:06:09.822 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.081 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:10.339 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:10.339 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:10.339 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:10.339 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:10.339 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:10.598 null0
00:06:10.598 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- #
(( ++i )) 00:06:10.598 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.598 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:10.856 null1 00:06:10.856 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.856 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.856 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:11.115 null2 00:06:11.115 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.115 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.115 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:11.374 null3 00:06:11.374 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.374 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.374 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:11.632 null4 00:06:11.632 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.632 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.632 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:11.890 null5 00:06:11.890 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.890 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.890 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:12.148 null6 00:06:12.148 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.148 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.148 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:12.407 null7 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.407 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
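The single-namespace hot-plug loop traced above (ns_hotplug_stress.sh@44-@50) keeps detaching namespace 1, re-attaching the Delay0 bdev, and growing the NULL1 null bdev by one block per pass for as long as the I/O generator (PID 1043809) is alive; it ends once kill -0 reports "No such process". A minimal sketch of that loop, reconstructed from the xtrace; the rpc_py, NQN, perf_pid and starting null_size values are assumptions, only the individual commands appear in the log:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
perf_pid=1043809    # the background I/O generator checked by kill -0 above
null_size=1000      # assumed starting size; this excerpt shows it at 1014..1028
while kill -0 "$perf_pid"; do
    "$rpc_py" nvmf_subsystem_remove_ns "$NQN" 1          # ns_hotplug_stress.sh@45
    "$rpc_py" nvmf_subsystem_add_ns "$NQN" Delay0        # ns_hotplug_stress.sh@46
    null_size=$((null_size + 1))                         # ns_hotplug_stress.sh@49
    "$rpc_py" bdev_null_resize NULL1 "$null_size"        # ns_hotplug_stress.sh@50
done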
00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
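Each of the eight background workers whose xtrace is interleaved from here on runs the add_remove helper (ns_hotplug_stress.sh@14-@18): ten rounds of attaching a fixed bdev under a fixed namespace ID and detaching it again. A sketch under the same rpc_py/NQN assumptions as the previous snippet:

add_remove() {
    local nsid=$1 bdev=$2                                         # e.g. "add_remove 1 null0"
    for ((i = 0; i < 10; i++)); do                                # ten rounds per worker (sh@16)
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev" # sh@17
        "$rpc_py" nvmf_subsystem_remove_ns "$NQN" "$nsid"         # sh@18
    done
}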
00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
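The fan-out itself (ns_hotplug_stress.sh@58-@66) creates one 100 MB null bdev per worker, launches the eight add_remove jobs in the background and then waits for all of them; the resulting wait on PIDs 1050574 through 1050595 is traced just below. Again a sketch reconstructed from the trace rather than the verbatim script:

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create null$i 100 4096   # sh@60: 100 MB null bdev, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) null$i &               # sh@63: nsid i+1 paired with bdev null$i
    pids+=($!)                                   # sh@64: remember the worker PID
done
wait "${pids[@]}"                                # sh@66: wait 1050574 1050576 ... 1050595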
00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1050574 1050576 1050579 1050582 1050585 1050589 1050592 1050595 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.408 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.667 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.926 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.185 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.185 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.185 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.185 11:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.185 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.185 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.445 11:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.445 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.704 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.964 11:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.964 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.223 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.223 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.223 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.482 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.742 11:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.742 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.001 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.260 11:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.260 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.260 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.519 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.520 11:24:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.520 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.779 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.780 11:24:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.780 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.039 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.039 11:24:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.298 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.298 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.298 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.298 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.298 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.298 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.298 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.299 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.299 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.558 11:24:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.558 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.817 11:24:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.817 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.076 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.077 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.336 11:24:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.336 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.336 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.336 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.336 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.336 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.336 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.336 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.596 11:24:18 
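The churn above is easier to follow in script form. Below is a minimal sketch of the hotplug loop reconstructed from the xtrace: the rpc.py path, the nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls, the cnode1 NQN and the null0..null7 bdev names are taken from the log, while the helper name and the one-worker-per-NSID parallelism are assumptions made to match the interleaved ordering seen above.

#!/usr/bin/env bash
# Sketch of the namespace churn driven by target/ns_hotplug_stress.sh
# (reconstructed, not the upstream script). Each worker repeatedly attaches
# one null bdev as a namespace of cnode1 and detaches it again; the workers
# run in parallel, which is why add_ns and remove_ns calls for different
# NSIDs interleave in the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                       # assumed helper name
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do   # matches the (( i < 10 )) guard in the xtrace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

for nsid in {1..8}; do               # null0..null7 become NSIDs 1..8
    add_remove "$nsid" "null$((nsid - 1))" &
done
wait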
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.596 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.855 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:18.115 
11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:18.115 rmmod nvme_tcp 00:06:18.115 rmmod nvme_fabrics 00:06:18.115 rmmod nvme_keyring 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1043347 ']' 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1043347 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1043347 ']' 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1043347 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1043347 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1043347' 00:06:18.115 killing process with pid 1043347 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1043347 00:06:18.115 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1043347 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:18.374 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:20.911 00:06:20.911 real 0m49.573s 00:06:20.911 user 3m36.573s 00:06:20.911 sys 0m15.371s 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:20.911 ************************************ 00:06:20.911 END TEST nvmf_ns_hotplug_stress 00:06:20.911 ************************************ 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.911 ************************************ 00:06:20.911 START TEST nvmf_delete_subsystem 00:06:20.911 ************************************ 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:20.911 * Looking for test storage... 00:06:20.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.911 11:24:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.911 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.912 --rc genhtml_branch_coverage=1 00:06:20.912 --rc genhtml_function_coverage=1 00:06:20.912 --rc genhtml_legend=1 00:06:20.912 --rc geninfo_all_blocks=1 00:06:20.912 --rc geninfo_unexecuted_blocks=1 00:06:20.912 00:06:20.912 ' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.912 --rc genhtml_branch_coverage=1 00:06:20.912 --rc genhtml_function_coverage=1 00:06:20.912 --rc genhtml_legend=1 00:06:20.912 --rc geninfo_all_blocks=1 00:06:20.912 --rc geninfo_unexecuted_blocks=1 00:06:20.912 00:06:20.912 ' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.912 --rc genhtml_branch_coverage=1 00:06:20.912 --rc genhtml_function_coverage=1 00:06:20.912 --rc genhtml_legend=1 00:06:20.912 --rc geninfo_all_blocks=1 00:06:20.912 --rc geninfo_unexecuted_blocks=1 00:06:20.912 00:06:20.912 ' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.912 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.912 --rc genhtml_branch_coverage=1 00:06:20.912 --rc genhtml_function_coverage=1 00:06:20.912 --rc genhtml_legend=1 00:06:20.912 --rc geninfo_all_blocks=1 00:06:20.912 --rc geninfo_unexecuted_blocks=1 00:06:20.912 00:06:20.912 ' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.912 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.185 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:26.185 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.186 
11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:26.186 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:26.186 Found net devices under 0000:af:00.0: cvl_0_0 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:26.186 Found net devices under 0000:af:00.1: cvl_0_1 
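The pass above resolves the two Intel E810 functions reported earlier (0000:af:00.0 and 0000:af:00.1, vendor:device 8086:159b, driver ice) to the kernel net devices cvl_0_0 and cvl_0_1 that the TCP test will use. A simplified standalone equivalent is sketched below; it assumes lspci is available, whereas the nvmf/common.sh helper walks a prebuilt PCI cache and recognises additional E810/X722/Mellanox device IDs.

#!/usr/bin/env bash
# List Intel E810 PCI functions and the net devices exposed under each
# function's sysfs node, mirroring the "Found net devices under ..." lines.
net_devs=()
for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] || continue              # skip functions with no netdev
        dev=$(basename "$path")
        echo "Found net devices under $pci: $dev"
        net_devs+=("$dev")
    done
done
echo "candidate test interfaces: ${net_devs[*]}"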
00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.186 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:06:26.445 00:06:26.445 --- 10.0.0.2 ping statistics --- 00:06:26.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.445 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:06:26.445 00:06:26.445 --- 10.0.0.1 ping statistics --- 00:06:26.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.445 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1055390 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1055390 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1055390 ']' 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.445 11:24:27 
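The plumbing captured above is the nvmf_tcp_init setup for a physical-NIC run: the target port cvl_0_0 is moved into its own network namespace, both ends get a 10.0.0.x/24 address, TCP port 4420 is opened on the initiator interface, and reachability is verified in both directions before the target application is started. A condensed replay of those commands follows; interface names, addresses, the namespace name and the iptables rule are taken verbatim from the log, and the preliminary address-flush steps are omitted.

#!/usr/bin/env bash
# Target NIC goes into a namespace; initiator NIC stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator interface, tagged so teardown can
# strip the rule again, then check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator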
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.445 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.445 [2024-11-15 11:24:27.192529] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:26.445 [2024-11-15 11:24:27.192590] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.445 [2024-11-15 11:24:27.293540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.736 [2024-11-15 11:24:27.340914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.736 [2024-11-15 11:24:27.340955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.736 [2024-11-15 11:24:27.340966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.736 [2024-11-15 11:24:27.340975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.736 [2024-11-15 11:24:27.340983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.736 [2024-11-15 11:24:27.342401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.736 [2024-11-15 11:24:27.342409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 [2024-11-15 11:24:27.486275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:26.736 11:24:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 [2024-11-15 11:24:27.502504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 NULL1 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 Delay0 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1055567 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:26.736 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:27.039 [2024-11-15 11:24:27.567183] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
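The trace above is a compact bring-up sequence: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 gets a listener on 10.0.0.2:4420, and the namespace behind it is a null bdev wrapped in a delay bdev. Expressed as direct rpc.py calls instead of the rpc_cmd wrapper (a sketch only; it assumes the in-tree scripts/rpc.py and the default /var/tmp/spdk.sock socket that waitforlisten polls above), the same setup is roughly:

# transport, with the same '-o -u 8192' options delete_subsystem.sh passes
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# subsystem allowing any host (-a), capped at 10 namespaces (-m 10)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 1000 MB null bdev with 512-byte blocks, then a delay bdev that adds a nominal
# 1,000,000 us to every read and write before it is exposed as a namespace
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With that in place, the spdk_nvme_perf invocation at line 26 of the script simply connects to 10.0.0.2:4420 and drives a 70/30 random read/write workload at queue depth 128 while the next step deletes the subsystem underneath it.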
00:06:29.039 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:29.039 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.039 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.039 Write completed with error (sct=0, sc=8) 00:06:29.039 Write completed with error (sct=0, sc=8) 00:06:29.039 Read completed with error (sct=0, sc=8) 00:06:29.039 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 [2024-11-15 11:24:29.777707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9900000c40 is same with the state(6) to be set 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, 
sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read 
completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Write completed with error (sct=0, sc=8) 
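The wall of "Read/Write completed with error (sct=0, sc=8)" entries here and below is spdk_nvme_perf reporting every outstanding I/O that completes with a failure status once nvmf_delete_subsystem begins tearing the subsystem down under the running workload; the individual lines matter less than how many there are and which statuses occur. A quick way to boil such a flood down (a sketch, assuming the console output has been saved to a file named perf.log, which is not something this job does):

grep -oE '(Read|Write) completed with error \(sct=[0-9]+, sc=[0-9]+\)' perf.log \
  | sort | uniq -c | sort -rn
# one line per (direction, sct, sc) combination, highest count first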
00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Read completed with error (sct=0, sc=8) 00:06:29.040 Write completed with error (sct=0, sc=8) 00:06:29.040 starting I/O failed: -6 00:06:29.976 [2024-11-15 11:24:30.745606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb895e0 is same with the state(6) to be set 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 [2024-11-15 11:24:30.780976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb884a0 is same with the state(6) to be set 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, 
sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 [2024-11-15 11:24:30.781159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb880e0 is same with the state(6) to be set 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Write completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.976 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 [2024-11-15 
11:24:30.781320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87f00 is same with the state(6) to be set 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Write completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 Read completed with error (sct=0, sc=8) 00:06:29.977 [2024-11-15 11:24:30.781989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f990000d350 is same with the state(6) to be set 00:06:29.977 Initializing NVMe Controllers 00:06:29.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:29.977 Controller IO queue size 128, less than required. 00:06:29.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:29.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:29.977 Initialization complete. Launching workers. 
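In the Latency(us) summary that follows, the Total row is the IOPS-weighted mean of the two per-core rows rather than a simple average. Recomputing it from the printed core 2 and core 3 figures (a sanity check on how the table is read, not part of the test itself):

# (IOPS_core2*avg_core2 + IOPS_core3*avg_core3) / (IOPS_core2 + IOPS_core3)
awk 'BEGIN { printf "%.2f\n", (173.19*1071090.51 + 154.33*896281.68) / (173.19 + 154.33) }'
# ~988720 us, matching the reported Total average of 988718.47 us to within
# the rounding already present in the per-core values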
00:06:29.977 ======================================================== 00:06:29.977 Latency(us) 00:06:29.977 Device Information : IOPS MiB/s Average min max 00:06:29.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.19 0.08 1071090.51 324.15 2001797.41 00:06:29.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.33 0.08 896281.68 239.93 2002078.71 00:06:29.977 ======================================================== 00:06:29.977 Total : 327.52 0.16 988718.47 239.93 2002078.71 00:06:29.977 00:06:29.977 [2024-11-15 11:24:30.782764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb895e0 (9): Bad file descriptor 00:06:29.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:29.977 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.977 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:29.977 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1055567 00:06:29.977 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1055567 00:06:30.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1055567) - No such process 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1055567 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1055567 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.544 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1055567 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.545 11:24:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.545 [2024-11-15 11:24:31.313928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1056209 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:30.545 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:30.545 [2024-11-15 11:24:31.383420] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
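After re-creating the subsystem and starting a second 3-second perf run, the script settles into the poll visible in the next few entries: kill -0 "$perf_pid" only tests whether the process still exists (it delivers no signal), and delete_subsystem.sh wraps it in a bounded sleep-0.5 loop so the test cannot hang if perf never exits. The same idiom, pulled out as a standalone sketch (variable names and the error message are illustrative, not the script's):

# wait roughly 10 s (0.5 s per iteration) for $perf_pid to go away
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "timed out waiting for pid $perf_pid" >&2; exit 1; }
    sleep 0.5
done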
00:06:31.112 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.112 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:31.112 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:31.680 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.680 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:31.680 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.252 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.252 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:32.252 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.510 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.510 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:32.510 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.075 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.075 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:33.075 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.643 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.643 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:33.643 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.902 Initializing NVMe Controllers 00:06:33.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:33.902 Controller IO queue size 128, less than required. 00:06:33.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:33.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:33.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:33.902 Initialization complete. Launching workers. 
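The clean numbers in the summary that follows line up with how the backing device was built: Delay0 adds a nominal 1,000,000 us to every I/O, so with the -q 128 queue depth this perf run requests, Little's law (throughput ≈ outstanding I/Os / per-I/O latency) predicts about 128 IOPS per core, and the minimum latencies sit just above one second. A quick check using core 2's reported average (the latency value is taken from the table below; this is arithmetic, not part of the test):

awk 'BEGIN { qd = 128; lat_s = 1.00388872; printf "%.1f IOPS\n", qd / lat_s }'
# ~127.5 IOPS, consistent with the 128.00 IOPS per core reported for this run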
00:06:33.902 ======================================================== 00:06:33.902 Latency(us) 00:06:33.902 Device Information : IOPS MiB/s Average min max 00:06:33.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003888.72 1000137.54 1013179.19 00:06:33.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003396.58 1000127.86 1013267.40 00:06:33.902 ======================================================== 00:06:33.902 Total : 256.00 0.12 1003642.65 1000127.86 1013267.40 00:06:33.902 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1056209 00:06:34.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1056209) - No such process 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1056209 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:34.161 rmmod nvme_tcp 00:06:34.161 rmmod nvme_fabrics 00:06:34.161 rmmod nvme_keyring 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1055390 ']' 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1055390 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1055390 ']' 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1055390 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1055390 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1055390' 00:06:34.161 killing process with pid 1055390 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1055390 00:06:34.161 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1055390 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.421 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:36.958 00:06:36.958 real 0m16.020s 00:06:36.958 user 0m29.301s 00:06:36.958 sys 0m5.291s 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.958 ************************************ 00:06:36.958 END TEST nvmf_delete_subsystem 00:06:36.958 ************************************ 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.958 ************************************ 00:06:36.958 START TEST nvmf_host_management 00:06:36.958 ************************************ 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:36.958 * Looking for test storage... 
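The iptr step in the teardown above is the counterpart of the iptables command at the top of this test's network setup: the ACCEPT rule for port 4420 was inserted with an '-m comment --comment SPDK_NVMF:...' tag, so cleanup can drop exactly the rules the harness added by round-tripping the ruleset through grep. Reduced to its two halves (same interface and port as this run; shown only to make the pairing explicit):

# setup: tag the rule so it can be identified later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: reload the saved ruleset minus anything carrying the SPDK_NVMF tag
iptables-save | grep -v SPDK_NVMF | iptables-restore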
00:06:36.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.958 --rc genhtml_branch_coverage=1 00:06:36.958 --rc genhtml_function_coverage=1 00:06:36.958 --rc genhtml_legend=1 00:06:36.958 --rc geninfo_all_blocks=1 00:06:36.958 --rc geninfo_unexecuted_blocks=1 00:06:36.958 00:06:36.958 ' 00:06:36.958 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.958 --rc genhtml_branch_coverage=1 00:06:36.958 --rc genhtml_function_coverage=1 00:06:36.958 --rc genhtml_legend=1 00:06:36.958 --rc geninfo_all_blocks=1 00:06:36.958 --rc geninfo_unexecuted_blocks=1 00:06:36.958 00:06:36.958 ' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.959 --rc genhtml_branch_coverage=1 00:06:36.959 --rc genhtml_function_coverage=1 00:06:36.959 --rc genhtml_legend=1 00:06:36.959 --rc geninfo_all_blocks=1 00:06:36.959 --rc geninfo_unexecuted_blocks=1 00:06:36.959 00:06:36.959 ' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.959 --rc genhtml_branch_coverage=1 00:06:36.959 --rc genhtml_function_coverage=1 00:06:36.959 --rc genhtml_legend=1 00:06:36.959 --rc geninfo_all_blocks=1 00:06:36.959 --rc geninfo_unexecuted_blocks=1 00:06:36.959 00:06:36.959 ' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:36.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:36.959 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:42.235 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:42.235 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:42.235 Found net devices under 0000:af:00.0: cvl_0_0 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.235 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.236 11:24:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:42.236 Found net devices under 0000:af:00.1: cvl_0_1 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:06:42.236 00:06:42.236 --- 10.0.0.2 ping statistics --- 00:06:42.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.236 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:06:42.236 00:06:42.236 --- 10.0.0.1 ping statistics --- 00:06:42.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.236 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.236 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1060478 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1060478 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1060478 ']' 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
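The sequence traced above splits the two ice ports between network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and given the target address 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and connectivity is verified with a ping in each direction. A rough manual equivalent, reusing the interface names from this run, would be:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP listener's traffic in
  ping -c 1 10.0.0.2                                             # initiator namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator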
00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.236 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:42.495 [2024-11-15 11:24:43.091412] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:42.495 [2024-11-15 11:24:43.091475] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.495 [2024-11-15 11:24:43.164290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.495 [2024-11-15 11:24:43.205909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.495 [2024-11-15 11:24:43.205942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.495 [2024-11-15 11:24:43.205949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.495 [2024-11-15 11:24:43.205954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.495 [2024-11-15 11:24:43.205959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
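At this point the target itself has been launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), and because the full tracepoint group mask 0xFFFF is enabled the app prints how its events can be inspected. Following the hints in the notices above, a snapshot of this run's tracepoints could be taken roughly like this while the target (shm id 0) is alive, or from the shared-memory file afterwards:

  spdk_trace -s nvmf -i 0              # live snapshot, as suggested by the target's own notice
  cp /dev/shm/nvmf_trace.0 /tmp/       # keep the trace file for offline analysis/debug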
00:06:42.495 [2024-11-15 11:24:43.207563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.495 [2024-11-15 11:24:43.207670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.495 [2024-11-15 11:24:43.207780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.495 [2024-11-15 11:24:43.207782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.495 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.495 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:42.495 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.495 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.495 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.754 [2024-11-15 11:24:43.356500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:42.754 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.755 Malloc0 00:06:42.755 [2024-11-15 11:24:43.428557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1060693 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1060693 /var/tmp/bdevperf.sock 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1060693 ']' 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:42.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:42.755 { 00:06:42.755 "params": { 00:06:42.755 "name": "Nvme$subsystem", 00:06:42.755 "trtype": "$TEST_TRANSPORT", 00:06:42.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:42.755 "adrfam": "ipv4", 00:06:42.755 "trsvcid": "$NVMF_PORT", 00:06:42.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:42.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:42.755 "hdgst": ${hdgst:-false}, 00:06:42.755 "ddgst": ${ddgst:-false} 00:06:42.755 }, 00:06:42.755 "method": "bdev_nvme_attach_controller" 00:06:42.755 } 00:06:42.755 EOF 00:06:42.755 )") 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:42.755 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:42.755 "params": { 00:06:42.755 "name": "Nvme0", 00:06:42.755 "trtype": "tcp", 00:06:42.755 "traddr": "10.0.0.2", 00:06:42.755 "adrfam": "ipv4", 00:06:42.755 "trsvcid": "4420", 00:06:42.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:42.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:42.755 "hdgst": false, 00:06:42.755 "ddgst": false 00:06:42.755 }, 00:06:42.755 "method": "bdev_nvme_attach_controller" 00:06:42.755 }' 00:06:42.755 [2024-11-15 11:24:43.522574] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
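The perf job above is driven by bdevperf reading a generated config on /dev/fd/63; the attach-controller parameters it contains are printed just above (Nvme0 over tcp to 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0 / host0, digests off). Written to an ordinary file, an equivalent standalone run would look roughly like the sketch below; the surrounding "subsystems"/"bdev"/"config" wrapper is not shown in the trace and is assumed here from the usual SPDK JSON config layout.

  cat > /tmp/nvme0.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10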
00:06:42.755 [2024-11-15 11:24:43.522616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060693 ] 00:06:42.755 [2024-11-15 11:24:43.605917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.014 [2024-11-15 11:24:43.654498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.273 Running I/O for 10 seconds... 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:43.273 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:43.533 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:43.533 
11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:43.533 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:43.533 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:43.533 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.533 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.533 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.792 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.792 [2024-11-15 11:24:44.421532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.792 [2024-11-15 11:24:44.421579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.421594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.792 [2024-11-15 11:24:44.421604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.421615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.792 [2024-11-15 11:24:44.421625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.421636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.792 [2024-11-15 11:24:44.421646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.421656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4a40 is same with the state(6) to be set 00:06:43.792 [2024-11-15 11:24:44.421983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 
11:24:44.421998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.792 [2024-11-15 11:24:44.422410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.792 [2024-11-15 11:24:44.422422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.422980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.422992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.793 [2024-11-15 11:24:44.423310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.793 [2024-11-15 11:24:44.423322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.794 [2024-11-15 11:24:44.423332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.794 [2024-11-15 11:24:44.423344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.794 [2024-11-15 11:24:44.423354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.794 [2024-11-15 11:24:44.423365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.794 [2024-11-15 11:24:44.423375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.794 [2024-11-15 11:24:44.423388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.794 [2024-11-15 11:24:44.423398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.794 [2024-11-15 11:24:44.423409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.794 [2024-11-15 11:24:44.423419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:43.794 [2024-11-15 11:24:44.423429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddd990 is same with the state(6) to be set 00:06:43.794 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.794 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:43.794 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.794 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.794 [2024-11-15 11:24:44.424863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:43.794 task offset: 89984 on job bdev=Nvme0n1 fails 00:06:43.794 00:06:43.794 Latency(us) 00:06:43.794 [2024-11-15T10:24:44.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.794 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:43.794 Job: Nvme0n1 ended in about 0.43 seconds with error 00:06:43.794 Verification LBA range: start 0x0 length 0x400 00:06:43.794 Nvme0n1 : 0.43 1474.21 92.14 147.42 0.00 38042.13 4766.25 34555.35 00:06:43.794 [2024-11-15T10:24:44.647Z] =================================================================================================================== 00:06:43.794 [2024-11-15T10:24:44.647Z] Total : 1474.21 92.14 147.42 0.00 38042.13 4766.25 34555.35 00:06:43.794 [2024-11-15 11:24:44.428025] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.794 [2024-11-15 11:24:44.428052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc4a40 (9): Bad file descriptor 00:06:43.794 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.794 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:43.794 [2024-11-15 11:24:44.474704] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
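The failure injected above is the point of the test: while bdevperf has I/O in flight, the host is removed from the subsystem (rpc_cmd nvmf_subsystem_remove_host at host_management.sh line 84), every queued command completes as ABORTED - SQ DELETION, bdevperf records the failed job (147.42 Fail/s in the table above) and stops itself, and once the host NQN has been added back the controller reset reconnects successfully. rpc_cmd is the test framework's shorthand for SPDK's scripts/rpc.py, so the two operations correspond roughly to the direct calls:

  # revoke the initiator's access: outstanding I/O on the connection is aborted
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # restore access: the host-side controller reset can then reconnect
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0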
00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1060693 00:06:44.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1060693) - No such process 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:44.728 { 00:06:44.728 "params": { 00:06:44.728 "name": "Nvme$subsystem", 00:06:44.728 "trtype": "$TEST_TRANSPORT", 00:06:44.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:44.728 "adrfam": "ipv4", 00:06:44.728 "trsvcid": "$NVMF_PORT", 00:06:44.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:44.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:44.728 "hdgst": ${hdgst:-false}, 00:06:44.728 "ddgst": ${ddgst:-false} 00:06:44.728 }, 00:06:44.728 "method": "bdev_nvme_attach_controller" 00:06:44.728 } 00:06:44.728 EOF 00:06:44.728 )") 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:44.728 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:44.728 "params": { 00:06:44.728 "name": "Nvme0", 00:06:44.728 "trtype": "tcp", 00:06:44.728 "traddr": "10.0.0.2", 00:06:44.728 "adrfam": "ipv4", 00:06:44.728 "trsvcid": "4420", 00:06:44.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:44.728 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:44.728 "hdgst": false, 00:06:44.728 "ddgst": false 00:06:44.728 }, 00:06:44.728 "method": "bdev_nvme_attach_controller" 00:06:44.728 }' 00:06:44.728 [2024-11-15 11:24:45.484090] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:06:44.728 [2024-11-15 11:24:45.484138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061059 ] 00:06:44.728 [2024-11-15 11:24:45.566282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.986 [2024-11-15 11:24:45.613004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.245 Running I/O for 1 seconds... 
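Because the first bdevperf instance had already shut down on the error (spdk_app_stop'd on non-zero above), the kill -9 of the saved perfpid reports "No such process" and is deliberately swallowed; the stale per-core lock files are then cleared and a second, 1-second bdevperf run is started against the same kind of generated config to confirm the target still serves I/O after the host was re-added. Condensed, the recovery check is roughly:

  kill -9 "$perfpid" || true        # old bdevperf is already gone; ignore the error
  rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1   # config fed by gen_nvmf_target_json, as above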
00:06:46.182 1536.00 IOPS, 96.00 MiB/s 00:06:46.182 Latency(us) 00:06:46.182 [2024-11-15T10:24:47.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.182 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:46.182 Verification LBA range: start 0x0 length 0x400 00:06:46.182 Nvme0n1 : 1.01 1586.99 99.19 0.00 0.00 39471.80 6136.55 34555.35 00:06:46.182 [2024-11-15T10:24:47.035Z] =================================================================================================================== 00:06:46.182 [2024-11-15T10:24:47.035Z] Total : 1586.99 99.19 0.00 0.00 39471.80 6136.55 34555.35 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.441 rmmod nvme_tcp 00:06:46.441 rmmod nvme_fabrics 00:06:46.441 rmmod nvme_keyring 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1060478 ']' 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1060478 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1060478 ']' 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1060478 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1060478 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:46.441 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1060478' 00:06:46.441 killing process with pid 1060478 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1060478 00:06:46.441 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1060478 00:06:46.700 [2024-11-15 11:24:47.423878] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.700 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:49.235 00:06:49.235 real 0m12.208s 00:06:49.235 user 0m20.998s 00:06:49.235 sys 0m5.282s 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.235 ************************************ 00:06:49.235 END TEST nvmf_host_management 00:06:49.235 ************************************ 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.235 ************************************ 00:06:49.235 START TEST nvmf_lvol 00:06:49.235 ************************************ 00:06:49.235 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:49.235 * Looking for test storage... 00:06:49.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.236 --rc genhtml_branch_coverage=1 00:06:49.236 --rc genhtml_function_coverage=1 00:06:49.236 --rc genhtml_legend=1 00:06:49.236 --rc geninfo_all_blocks=1 00:06:49.236 --rc geninfo_unexecuted_blocks=1 00:06:49.236 00:06:49.236 ' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.236 --rc genhtml_branch_coverage=1 00:06:49.236 --rc genhtml_function_coverage=1 00:06:49.236 --rc genhtml_legend=1 00:06:49.236 --rc geninfo_all_blocks=1 00:06:49.236 --rc geninfo_unexecuted_blocks=1 00:06:49.236 00:06:49.236 ' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.236 --rc genhtml_branch_coverage=1 00:06:49.236 --rc genhtml_function_coverage=1 00:06:49.236 --rc genhtml_legend=1 00:06:49.236 --rc geninfo_all_blocks=1 00:06:49.236 --rc geninfo_unexecuted_blocks=1 00:06:49.236 00:06:49.236 ' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.236 --rc genhtml_branch_coverage=1 00:06:49.236 --rc genhtml_function_coverage=1 00:06:49.236 --rc genhtml_legend=1 00:06:49.236 --rc geninfo_all_blocks=1 00:06:49.236 --rc geninfo_unexecuted_blocks=1 00:06:49.236 00:06:49.236 ' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.236 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.237 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.509 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:54.510 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:54.510 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.510 11:24:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:54.510 Found net devices under 0000:af:00.0: cvl_0_0 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:54.510 Found net devices under 0000:af:00.1: cvl_0_1 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.510 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:54.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:06:54.770 00:06:54.770 --- 10.0.0.2 ping statistics --- 00:06:54.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.770 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:54.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:06:54.770 00:06:54.770 --- 10.0.0.1 ping statistics --- 00:06:54.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.770 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1065052 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1065052 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1065052 ']' 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.770 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:54.770 [2024-11-15 11:24:55.471212] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:06:54.770 [2024-11-15 11:24:55.471267] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.770 [2024-11-15 11:24:55.572635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.770 [2024-11-15 11:24:55.621692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.770 [2024-11-15 11:24:55.621734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.770 [2024-11-15 11:24:55.621745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.770 [2024-11-15 11:24:55.621754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.770 [2024-11-15 11:24:55.621761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.029 [2024-11-15 11:24:55.623549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.029 [2024-11-15 11:24:55.623658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.029 [2024-11-15 11:24:55.623662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.029 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:55.286 [2024-11-15 11:24:56.021070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.286 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:55.544 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:55.544 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:55.803 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:55.803 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:56.062 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:56.321 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=40307a04-b636-4648-9982-978fbb960069 00:06:56.321 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40307a04-b636-4648-9982-978fbb960069 lvol 20 00:06:56.580 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d352545a-9476-4d8f-93ef-db544f136b71 00:06:56.580 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:56.839 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d352545a-9476-4d8f-93ef-db544f136b71 00:06:57.098 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:57.357 [2024-11-15 11:24:58.187417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.357 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:57.925 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1065614 00:06:57.925 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:57.925 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:58.862 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d352545a-9476-4d8f-93ef-db544f136b71 MY_SNAPSHOT 00:06:59.121 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b5c6f851-8d2a-4f3e-bea9-16d7a070ae01 00:06:59.121 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d352545a-9476-4d8f-93ef-db544f136b71 30 00:06:59.380 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b5c6f851-8d2a-4f3e-bea9-16d7a070ae01 MY_CLONE 00:06:59.947 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4ee4185e-510f-4928-aeb6-20a2c89a8216 00:06:59.947 11:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4ee4185e-510f-4928-aeb6-20a2c89a8216 00:07:00.515 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1065614 00:07:08.635 Initializing NVMe Controllers 00:07:08.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:08.635 Controller IO queue size 128, less than required. 00:07:08.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:08.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:08.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:08.635 Initialization complete. Launching workers. 00:07:08.635 ======================================================== 00:07:08.635 Latency(us) 00:07:08.635 Device Information : IOPS MiB/s Average min max 00:07:08.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13449.30 52.54 9522.17 511.41 67933.83 00:07:08.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8537.10 33.35 15000.45 1471.99 85103.00 00:07:08.635 ======================================================== 00:07:08.635 Total : 21986.40 85.88 11649.33 511.41 85103.00 00:07:08.635 00:07:08.635 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.635 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d352545a-9476-4d8f-93ef-db544f136b71 00:07:08.635 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40307a04-b636-4648-9982-978fbb960069 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.894 rmmod nvme_tcp 00:07:08.894 rmmod nvme_fabrics 00:07:08.894 rmmod nvme_keyring 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1065052 ']' 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1065052 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1065052 ']' 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1065052 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.894 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1065052 00:07:09.154 11:25:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1065052' 00:07:09.154 killing process with pid 1065052 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1065052 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1065052 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.154 11:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.688 00:07:11.688 real 0m22.479s 00:07:11.688 user 1m6.716s 00:07:11.688 sys 0m7.426s 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.688 ************************************ 00:07:11.688 END TEST nvmf_lvol 00:07:11.688 ************************************ 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.688 ************************************ 00:07:11.688 START TEST nvmf_lvs_grow 00:07:11.688 ************************************ 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:11.688 * Looking for test storage... 
00:07:11.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.688 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:11.689 11:25:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.689 11:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:18.259 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:18.259 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.259 11:25:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:18.259 Found net devices under 0000:af:00.0: cvl_0_0 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.259 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:18.260 Found net devices under 0000:af:00.1: cvl_0_1 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.260 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:07:18.260 00:07:18.260 --- 10.0.0.2 ping statistics --- 00:07:18.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.260 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:18.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:18.260 00:07:18.260 --- 10.0.0.1 ping statistics --- 00:07:18.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.260 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1071408 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1071408 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1071408 ']' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.260 [2024-11-15 11:25:18.282600] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
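For readability: the nvmftestinit bring-up traced above amounts to splitting the two e810 ports into a target-side network namespace and a root-namespace initiator side, then launching the target inside that namespace. A condensed, hand-runnable sketch, assuming the same cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing, with nvmf_tgt abbreviating the full build/bin path:

    # move the target port into its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, verify reachability both ways, load the host-side driver
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # the target runs inside the namespace, one core (-m 0x1), tracepoint group mask 0xFFFF (-e)
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &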
00:07:18.260 [2024-11-15 11:25:18.282660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.260 [2024-11-15 11:25:18.383974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.260 [2024-11-15 11:25:18.430528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.260 [2024-11-15 11:25:18.430580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.260 [2024-11-15 11:25:18.430592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.260 [2024-11-15 11:25:18.430602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.260 [2024-11-15 11:25:18.430609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.260 [2024-11-15 11:25:18.431322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:18.260 [2024-11-15 11:25:18.819008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.260 ************************************ 00:07:18.260 START TEST lvs_grow_clean 00:07:18.260 ************************************ 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:18.260 11:25:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.260 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.519 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:18.519 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:18.519 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2759b22-942e-4e79-8456-040765920c54 00:07:18.519 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:18.520 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:18.778 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:18.778 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:18.778 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2759b22-942e-4e79-8456-040765920c54 lvol 150 00:07:19.038 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f53d1332-d1d7-4217-a975-f587d65dffa3 00:07:19.038 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.038 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:19.296 [2024-11-15 11:25:20.038818] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:19.296 [2024-11-15 11:25:20.038887] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:19.296 true 00:07:19.296 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b2759b22-942e-4e79-8456-040765920c54 00:07:19.296 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:19.556 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:19.556 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.814 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f53d1332-d1d7-4217-a975-f587d65dffa3 00:07:20.073 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:20.331 [2024-11-15 11:25:20.945694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.331 11:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1072023 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1072023 /var/tmp/bdevperf.sock 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1072023 ']' 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:20.590 [2024-11-15 11:25:21.255129] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
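The export/attach sequence around this point is spread across several wrapped trace lines; condensed, with rpc.py, bdevperf and bdevperf.py abbreviating the full workspace paths and <lvol-uuid> standing for the f53d1332-... volume created above, it is approximately:

    # target side (default /var/tmp/spdk.sock): transport, subsystem, lvol-backed namespace, listeners
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf exposes its own RPC socket, attaches to the subsystem as Nvme0,
    # and is only driven once perform_tests is issued
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests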
00:07:20.590 [2024-11-15 11:25:21.255173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072023 ] 00:07:20.590 [2024-11-15 11:25:21.308306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.590 [2024-11-15 11:25:21.347891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.590 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:20.591 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:20.591 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:20.849 Nvme0n1 00:07:21.108 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:21.367 [ 00:07:21.367 { 00:07:21.367 "name": "Nvme0n1", 00:07:21.367 "aliases": [ 00:07:21.367 "f53d1332-d1d7-4217-a975-f587d65dffa3" 00:07:21.367 ], 00:07:21.367 "product_name": "NVMe disk", 00:07:21.367 "block_size": 4096, 00:07:21.367 "num_blocks": 38912, 00:07:21.367 "uuid": "f53d1332-d1d7-4217-a975-f587d65dffa3", 00:07:21.367 "numa_id": 1, 00:07:21.367 "assigned_rate_limits": { 00:07:21.367 "rw_ios_per_sec": 0, 00:07:21.367 "rw_mbytes_per_sec": 0, 00:07:21.367 "r_mbytes_per_sec": 0, 00:07:21.367 "w_mbytes_per_sec": 0 00:07:21.367 }, 00:07:21.367 "claimed": false, 00:07:21.367 "zoned": false, 00:07:21.367 "supported_io_types": { 00:07:21.367 "read": true, 00:07:21.367 "write": true, 00:07:21.367 "unmap": true, 00:07:21.367 "flush": true, 00:07:21.367 "reset": true, 00:07:21.367 "nvme_admin": true, 00:07:21.367 "nvme_io": true, 00:07:21.367 "nvme_io_md": false, 00:07:21.367 "write_zeroes": true, 00:07:21.367 "zcopy": false, 00:07:21.367 "get_zone_info": false, 00:07:21.367 "zone_management": false, 00:07:21.367 "zone_append": false, 00:07:21.367 "compare": true, 00:07:21.367 "compare_and_write": true, 00:07:21.367 "abort": true, 00:07:21.367 "seek_hole": false, 00:07:21.367 "seek_data": false, 00:07:21.367 "copy": true, 00:07:21.367 "nvme_iov_md": false 00:07:21.367 }, 00:07:21.367 "memory_domains": [ 00:07:21.367 { 00:07:21.367 "dma_device_id": "system", 00:07:21.367 "dma_device_type": 1 00:07:21.367 } 00:07:21.367 ], 00:07:21.367 "driver_specific": { 00:07:21.367 "nvme": [ 00:07:21.367 { 00:07:21.367 "trid": { 00:07:21.367 "trtype": "TCP", 00:07:21.367 "adrfam": "IPv4", 00:07:21.367 "traddr": "10.0.0.2", 00:07:21.367 "trsvcid": "4420", 00:07:21.367 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:21.367 }, 00:07:21.367 "ctrlr_data": { 00:07:21.367 "cntlid": 1, 00:07:21.367 "vendor_id": "0x8086", 00:07:21.367 "model_number": "SPDK bdev Controller", 00:07:21.367 "serial_number": "SPDK0", 00:07:21.367 "firmware_revision": "25.01", 00:07:21.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:21.367 "oacs": { 00:07:21.367 "security": 0, 00:07:21.367 "format": 0, 00:07:21.367 "firmware": 0, 00:07:21.367 "ns_manage": 0 00:07:21.367 }, 00:07:21.367 "multi_ctrlr": true, 00:07:21.367 
"ana_reporting": false 00:07:21.367 }, 00:07:21.367 "vs": { 00:07:21.367 "nvme_version": "1.3" 00:07:21.367 }, 00:07:21.367 "ns_data": { 00:07:21.367 "id": 1, 00:07:21.367 "can_share": true 00:07:21.367 } 00:07:21.367 } 00:07:21.367 ], 00:07:21.367 "mp_policy": "active_passive" 00:07:21.367 } 00:07:21.367 } 00:07:21.367 ] 00:07:21.367 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1072034 00:07:21.367 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:21.367 11:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:21.367 Running I/O for 10 seconds... 00:07:22.303 Latency(us) 00:07:22.303 [2024-11-15T10:25:23.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.303 Nvme0n1 : 1.00 14375.00 56.15 0.00 0.00 0.00 0.00 0.00 00:07:22.303 [2024-11-15T10:25:23.156Z] =================================================================================================================== 00:07:22.303 [2024-11-15T10:25:23.156Z] Total : 14375.00 56.15 0.00 0.00 0.00 0.00 0.00 00:07:22.303 00:07:23.239 11:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2759b22-942e-4e79-8456-040765920c54 00:07:23.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.239 Nvme0n1 : 2.00 14443.50 56.42 0.00 0.00 0.00 0.00 0.00 00:07:23.239 [2024-11-15T10:25:24.092Z] =================================================================================================================== 00:07:23.239 [2024-11-15T10:25:24.092Z] Total : 14443.50 56.42 0.00 0.00 0.00 0.00 0.00 00:07:23.239 00:07:23.498 true 00:07:23.498 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:23.498 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:23.757 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:23.757 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:23.757 11:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1072034 00:07:24.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.324 Nvme0n1 : 3.00 14477.00 56.55 0.00 0.00 0.00 0.00 0.00 00:07:24.324 [2024-11-15T10:25:25.177Z] =================================================================================================================== 00:07:24.324 [2024-11-15T10:25:25.177Z] Total : 14477.00 56.55 0.00 0.00 0.00 0.00 0.00 00:07:24.324 00:07:25.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.260 Nvme0n1 : 4.00 14505.75 56.66 0.00 0.00 0.00 0.00 0.00 00:07:25.260 [2024-11-15T10:25:26.113Z] 
=================================================================================================================== 00:07:25.260 [2024-11-15T10:25:26.113Z] Total : 14505.75 56.66 0.00 0.00 0.00 0.00 0.00 00:07:25.260 00:07:26.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.637 Nvme0n1 : 5.00 14532.60 56.77 0.00 0.00 0.00 0.00 0.00 00:07:26.637 [2024-11-15T10:25:27.490Z] =================================================================================================================== 00:07:26.637 [2024-11-15T10:25:27.490Z] Total : 14532.60 56.77 0.00 0.00 0.00 0.00 0.00 00:07:26.637 00:07:27.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.573 Nvme0n1 : 6.00 14557.17 56.86 0.00 0.00 0.00 0.00 0.00 00:07:27.573 [2024-11-15T10:25:28.426Z] =================================================================================================================== 00:07:27.573 [2024-11-15T10:25:28.426Z] Total : 14557.17 56.86 0.00 0.00 0.00 0.00 0.00 00:07:27.573 00:07:28.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.508 Nvme0n1 : 7.00 14573.57 56.93 0.00 0.00 0.00 0.00 0.00 00:07:28.508 [2024-11-15T10:25:29.361Z] =================================================================================================================== 00:07:28.508 [2024-11-15T10:25:29.361Z] Total : 14573.57 56.93 0.00 0.00 0.00 0.00 0.00 00:07:28.508 00:07:29.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.445 Nvme0n1 : 8.00 14584.88 56.97 0.00 0.00 0.00 0.00 0.00 00:07:29.445 [2024-11-15T10:25:30.298Z] =================================================================================================================== 00:07:29.445 [2024-11-15T10:25:30.298Z] Total : 14584.88 56.97 0.00 0.00 0.00 0.00 0.00 00:07:29.445 00:07:30.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.380 Nvme0n1 : 9.00 14598.11 57.02 0.00 0.00 0.00 0.00 0.00 00:07:30.380 [2024-11-15T10:25:31.233Z] =================================================================================================================== 00:07:30.380 [2024-11-15T10:25:31.233Z] Total : 14598.11 57.02 0.00 0.00 0.00 0.00 0.00 00:07:30.380 00:07:31.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.316 Nvme0n1 : 10.00 14611.10 57.07 0.00 0.00 0.00 0.00 0.00 00:07:31.316 [2024-11-15T10:25:32.169Z] =================================================================================================================== 00:07:31.316 [2024-11-15T10:25:32.169Z] Total : 14611.10 57.07 0.00 0.00 0.00 0.00 0.00 00:07:31.316 00:07:31.316 00:07:31.316 Latency(us) 00:07:31.316 [2024-11-15T10:25:32.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.316 Nvme0n1 : 10.01 14611.48 57.08 0.00 0.00 8753.94 2546.97 11736.90 00:07:31.316 [2024-11-15T10:25:32.169Z] =================================================================================================================== 00:07:31.316 [2024-11-15T10:25:32.169Z] Total : 14611.48 57.08 0.00 0.00 8753.94 2546.97 11736.90 00:07:31.316 { 00:07:31.316 "results": [ 00:07:31.316 { 00:07:31.316 "job": "Nvme0n1", 00:07:31.316 "core_mask": "0x2", 00:07:31.316 "workload": "randwrite", 00:07:31.316 "status": "finished", 00:07:31.316 "queue_depth": 128, 00:07:31.316 "io_size": 4096, 00:07:31.316 
"runtime": 10.008502, 00:07:31.316 "iops": 14611.477321980852, 00:07:31.316 "mibps": 57.076083288987704, 00:07:31.316 "io_failed": 0, 00:07:31.316 "io_timeout": 0, 00:07:31.316 "avg_latency_us": 8753.941870052076, 00:07:31.316 "min_latency_us": 2546.9672727272728, 00:07:31.316 "max_latency_us": 11736.901818181817 00:07:31.316 } 00:07:31.316 ], 00:07:31.317 "core_count": 1 00:07:31.317 } 00:07:31.317 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1072023 00:07:31.317 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1072023 ']' 00:07:31.317 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1072023 00:07:31.317 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:31.317 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.317 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1072023 00:07:31.576 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:31.576 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:31.576 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1072023' 00:07:31.576 killing process with pid 1072023 00:07:31.576 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1072023 00:07:31.576 Received shutdown signal, test time was about 10.000000 seconds 00:07:31.576 00:07:31.576 Latency(us) 00:07:31.576 [2024-11-15T10:25:32.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.576 [2024-11-15T10:25:32.429Z] =================================================================================================================== 00:07:31.576 [2024-11-15T10:25:32.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:31.576 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1072023 00:07:31.576 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.835 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:32.094 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:32.094 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:32.351 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:32.351 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:32.351 11:25:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:32.610 [2024-11-15 11:25:33.394613] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:32.610 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:32.868 request: 00:07:32.868 { 00:07:32.868 "uuid": "b2759b22-942e-4e79-8456-040765920c54", 00:07:32.868 "method": "bdev_lvol_get_lvstores", 00:07:32.868 "req_id": 1 00:07:32.869 } 00:07:32.869 Got JSON-RPC error response 00:07:32.869 response: 00:07:32.869 { 00:07:32.869 "code": -19, 00:07:32.869 "message": "No such device" 00:07:32.869 } 00:07:32.869 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:32.869 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.869 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.869 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.869 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.128 aio_bdev 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f53d1332-d1d7-4217-a975-f587d65dffa3 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=f53d1332-d1d7-4217-a975-f587d65dffa3 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:33.128 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:33.387 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f53d1332-d1d7-4217-a975-f587d65dffa3 -t 2000 00:07:33.646 [ 00:07:33.646 { 00:07:33.646 "name": "f53d1332-d1d7-4217-a975-f587d65dffa3", 00:07:33.646 "aliases": [ 00:07:33.646 "lvs/lvol" 00:07:33.646 ], 00:07:33.646 "product_name": "Logical Volume", 00:07:33.646 "block_size": 4096, 00:07:33.646 "num_blocks": 38912, 00:07:33.646 "uuid": "f53d1332-d1d7-4217-a975-f587d65dffa3", 00:07:33.646 "assigned_rate_limits": { 00:07:33.646 "rw_ios_per_sec": 0, 00:07:33.646 "rw_mbytes_per_sec": 0, 00:07:33.646 "r_mbytes_per_sec": 0, 00:07:33.646 "w_mbytes_per_sec": 0 00:07:33.646 }, 00:07:33.646 "claimed": false, 00:07:33.646 "zoned": false, 00:07:33.646 "supported_io_types": { 00:07:33.646 "read": true, 00:07:33.646 "write": true, 00:07:33.646 "unmap": true, 00:07:33.646 "flush": false, 00:07:33.646 "reset": true, 00:07:33.646 "nvme_admin": false, 00:07:33.646 "nvme_io": false, 00:07:33.646 "nvme_io_md": false, 00:07:33.646 "write_zeroes": true, 00:07:33.646 "zcopy": false, 00:07:33.646 "get_zone_info": false, 00:07:33.646 "zone_management": false, 00:07:33.646 "zone_append": false, 00:07:33.646 "compare": false, 00:07:33.646 "compare_and_write": false, 00:07:33.646 "abort": false, 00:07:33.646 "seek_hole": true, 00:07:33.646 "seek_data": true, 00:07:33.646 "copy": false, 00:07:33.646 "nvme_iov_md": false 00:07:33.646 }, 00:07:33.646 "driver_specific": { 00:07:33.646 "lvol": { 00:07:33.646 "lvol_store_uuid": "b2759b22-942e-4e79-8456-040765920c54", 00:07:33.646 "base_bdev": "aio_bdev", 00:07:33.646 "thin_provision": false, 00:07:33.646 "num_allocated_clusters": 38, 00:07:33.646 "snapshot": false, 00:07:33.646 "clone": false, 00:07:33.646 "esnap_clone": false 00:07:33.646 } 00:07:33.647 } 00:07:33.647 } 00:07:33.647 ] 00:07:33.647 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:33.647 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:33.647 
11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:33.906 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:33.906 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2759b22-942e-4e79-8456-040765920c54 00:07:33.906 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:34.164 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:34.164 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f53d1332-d1d7-4217-a975-f587d65dffa3 00:07:34.423 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2759b22-942e-4e79-8456-040765920c54 00:07:34.682 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.941 00:07:34.941 real 0m16.879s 00:07:34.941 user 0m16.409s 00:07:34.941 sys 0m1.665s 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.941 ************************************ 00:07:34.941 END TEST lvs_grow_clean 00:07:34.941 ************************************ 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.941 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.200 ************************************ 00:07:35.200 START TEST lvs_grow_dirty 00:07:35.200 ************************************ 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.200 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.458 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:35.459 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:35.718 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:35.718 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:35.718 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:35.978 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:35.978 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:35.978 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb lvol 150 00:07:36.236 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:36.236 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.236 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:36.236 [2024-11-15 11:25:36.986939] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:36.237 [2024-11-15 11:25:36.987003] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:36.237 true 00:07:36.237 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:36.237 11:25:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:36.496 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:36.496 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.756 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:37.015 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:37.274 [2024-11-15 11:25:37.974017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.274 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1075057 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1075057 /var/tmp/bdevperf.sock 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1075057 ']' 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.533 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.533 [2024-11-15 11:25:38.311050] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
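Both the clean run above and this dirty run grow the logical volume store the same way: the 200M AIO backing file is enlarged to 400M, the AIO bdev is rescanned so its block count goes from 51200 to 102400, and the store is then grown from 49 to 99 data clusters (4MiB clusters, minus metadata). A condensed sketch, with rpc.py and the aio_bdev file path abbreviated and <lvs-uuid> standing for the UUID reported by bdev_lvol_create_lvstore:

    truncate -s 200M aio_bdev_file
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
    truncate -s 400M aio_bdev_file
    rpc.py bdev_aio_rescan aio_bdev        # base bdev grows from 51200 to 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 99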
00:07:37.533 [2024-11-15 11:25:38.311111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075057 ] 00:07:37.533 [2024-11-15 11:25:38.377822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.792 [2024-11-15 11:25:38.418108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.793 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.793 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:37.793 11:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:38.360 Nvme0n1 00:07:38.360 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:38.619 [ 00:07:38.619 { 00:07:38.619 "name": "Nvme0n1", 00:07:38.619 "aliases": [ 00:07:38.619 "5c07e716-2bd9-476f-a05e-acbddfcbca97" 00:07:38.619 ], 00:07:38.619 "product_name": "NVMe disk", 00:07:38.619 "block_size": 4096, 00:07:38.619 "num_blocks": 38912, 00:07:38.619 "uuid": "5c07e716-2bd9-476f-a05e-acbddfcbca97", 00:07:38.619 "numa_id": 1, 00:07:38.619 "assigned_rate_limits": { 00:07:38.619 "rw_ios_per_sec": 0, 00:07:38.619 "rw_mbytes_per_sec": 0, 00:07:38.619 "r_mbytes_per_sec": 0, 00:07:38.619 "w_mbytes_per_sec": 0 00:07:38.619 }, 00:07:38.619 "claimed": false, 00:07:38.619 "zoned": false, 00:07:38.619 "supported_io_types": { 00:07:38.619 "read": true, 00:07:38.619 "write": true, 00:07:38.619 "unmap": true, 00:07:38.619 "flush": true, 00:07:38.619 "reset": true, 00:07:38.619 "nvme_admin": true, 00:07:38.619 "nvme_io": true, 00:07:38.619 "nvme_io_md": false, 00:07:38.619 "write_zeroes": true, 00:07:38.619 "zcopy": false, 00:07:38.619 "get_zone_info": false, 00:07:38.619 "zone_management": false, 00:07:38.619 "zone_append": false, 00:07:38.619 "compare": true, 00:07:38.619 "compare_and_write": true, 00:07:38.619 "abort": true, 00:07:38.619 "seek_hole": false, 00:07:38.619 "seek_data": false, 00:07:38.619 "copy": true, 00:07:38.619 "nvme_iov_md": false 00:07:38.619 }, 00:07:38.619 "memory_domains": [ 00:07:38.619 { 00:07:38.619 "dma_device_id": "system", 00:07:38.619 "dma_device_type": 1 00:07:38.619 } 00:07:38.619 ], 00:07:38.619 "driver_specific": { 00:07:38.619 "nvme": [ 00:07:38.619 { 00:07:38.619 "trid": { 00:07:38.619 "trtype": "TCP", 00:07:38.619 "adrfam": "IPv4", 00:07:38.619 "traddr": "10.0.0.2", 00:07:38.619 "trsvcid": "4420", 00:07:38.619 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:38.619 }, 00:07:38.619 "ctrlr_data": { 00:07:38.619 "cntlid": 1, 00:07:38.619 "vendor_id": "0x8086", 00:07:38.619 "model_number": "SPDK bdev Controller", 00:07:38.619 "serial_number": "SPDK0", 00:07:38.619 "firmware_revision": "25.01", 00:07:38.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.619 "oacs": { 00:07:38.619 "security": 0, 00:07:38.619 "format": 0, 00:07:38.619 "firmware": 0, 00:07:38.619 "ns_manage": 0 00:07:38.619 }, 00:07:38.619 "multi_ctrlr": true, 00:07:38.619 
"ana_reporting": false 00:07:38.619 }, 00:07:38.619 "vs": { 00:07:38.619 "nvme_version": "1.3" 00:07:38.619 }, 00:07:38.619 "ns_data": { 00:07:38.619 "id": 1, 00:07:38.619 "can_share": true 00:07:38.619 } 00:07:38.619 } 00:07:38.619 ], 00:07:38.619 "mp_policy": "active_passive" 00:07:38.619 } 00:07:38.619 } 00:07:38.619 ] 00:07:38.619 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1075249 00:07:38.619 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:38.619 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.619 Running I/O for 10 seconds... 00:07:39.996 Latency(us) 00:07:39.996 [2024-11-15T10:25:40.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.996 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:39.996 [2024-11-15T10:25:40.849Z] =================================================================================================================== 00:07:39.996 [2024-11-15T10:25:40.849Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:39.996 00:07:40.564 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:40.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.823 Nvme0n1 : 2.00 14989.50 58.55 0.00 0.00 0.00 0.00 0.00 00:07:40.823 [2024-11-15T10:25:41.676Z] =================================================================================================================== 00:07:40.823 [2024-11-15T10:25:41.676Z] Total : 14989.50 58.55 0.00 0.00 0.00 0.00 0.00 00:07:40.823 00:07:40.823 true 00:07:40.823 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:40.823 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:41.083 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:41.083 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:41.083 11:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1075249 00:07:41.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.652 Nvme0n1 : 3.00 15073.00 58.88 0.00 0.00 0.00 0.00 0.00 00:07:41.652 [2024-11-15T10:25:42.505Z] =================================================================================================================== 00:07:41.652 [2024-11-15T10:25:42.505Z] Total : 15073.00 58.88 0.00 0.00 0.00 0.00 0.00 00:07:41.652 00:07:43.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.030 Nvme0n1 : 4.00 15114.75 59.04 0.00 0.00 0.00 0.00 0.00 00:07:43.030 [2024-11-15T10:25:43.883Z] 
=================================================================================================================== 00:07:43.030 [2024-11-15T10:25:43.883Z] Total : 15114.75 59.04 0.00 0.00 0.00 0.00 0.00 00:07:43.030 00:07:43.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.598 Nvme0n1 : 5.00 15154.60 59.20 0.00 0.00 0.00 0.00 0.00 00:07:43.598 [2024-11-15T10:25:44.451Z] =================================================================================================================== 00:07:43.598 [2024-11-15T10:25:44.451Z] Total : 15154.60 59.20 0.00 0.00 0.00 0.00 0.00 00:07:43.598 00:07:44.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.650 Nvme0n1 : 6.00 15168.83 59.25 0.00 0.00 0.00 0.00 0.00 00:07:44.650 [2024-11-15T10:25:45.503Z] =================================================================================================================== 00:07:44.650 [2024-11-15T10:25:45.503Z] Total : 15168.83 59.25 0.00 0.00 0.00 0.00 0.00 00:07:44.650 00:07:45.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.614 Nvme0n1 : 7.00 15197.14 59.36 0.00 0.00 0.00 0.00 0.00 00:07:45.614 [2024-11-15T10:25:46.467Z] =================================================================================================================== 00:07:45.614 [2024-11-15T10:25:46.467Z] Total : 15197.14 59.36 0.00 0.00 0.00 0.00 0.00 00:07:45.614 00:07:46.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.990 Nvme0n1 : 8.00 15210.62 59.42 0.00 0.00 0.00 0.00 0.00 00:07:46.990 [2024-11-15T10:25:47.843Z] =================================================================================================================== 00:07:46.990 [2024-11-15T10:25:47.843Z] Total : 15210.62 59.42 0.00 0.00 0.00 0.00 0.00 00:07:46.990 00:07:47.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.926 Nvme0n1 : 9.00 15228.00 59.48 0.00 0.00 0.00 0.00 0.00 00:07:47.926 [2024-11-15T10:25:48.779Z] =================================================================================================================== 00:07:47.926 [2024-11-15T10:25:48.779Z] Total : 15228.00 59.48 0.00 0.00 0.00 0.00 0.00 00:07:47.926 00:07:48.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.862 Nvme0n1 : 10.00 15236.30 59.52 0.00 0.00 0.00 0.00 0.00 00:07:48.862 [2024-11-15T10:25:49.715Z] =================================================================================================================== 00:07:48.862 [2024-11-15T10:25:49.716Z] Total : 15236.30 59.52 0.00 0.00 0.00 0.00 0.00 00:07:48.863 00:07:48.863 00:07:48.863 Latency(us) 00:07:48.863 [2024-11-15T10:25:49.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.863 Nvme0n1 : 10.00 15246.11 59.56 0.00 0.00 8392.31 3902.37 16443.58 00:07:48.863 [2024-11-15T10:25:49.716Z] =================================================================================================================== 00:07:48.863 [2024-11-15T10:25:49.716Z] Total : 15246.11 59.56 0.00 0.00 8392.31 3902.37 16443.58 00:07:48.863 { 00:07:48.863 "results": [ 00:07:48.863 { 00:07:48.863 "job": "Nvme0n1", 00:07:48.863 "core_mask": "0x2", 00:07:48.863 "workload": "randwrite", 00:07:48.863 "status": "finished", 00:07:48.863 "queue_depth": 128, 00:07:48.863 "io_size": 4096, 00:07:48.863 
"runtime": 10.001958, 00:07:48.863 "iops": 15246.11481072006, 00:07:48.863 "mibps": 59.55513597937524, 00:07:48.863 "io_failed": 0, 00:07:48.863 "io_timeout": 0, 00:07:48.863 "avg_latency_us": 8392.306050419666, 00:07:48.863 "min_latency_us": 3902.370909090909, 00:07:48.863 "max_latency_us": 16443.578181818182 00:07:48.863 } 00:07:48.863 ], 00:07:48.863 "core_count": 1 00:07:48.863 } 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1075057 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1075057 ']' 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1075057 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1075057 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1075057' 00:07:48.863 killing process with pid 1075057 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1075057 00:07:48.863 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.863 00:07:48.863 Latency(us) 00:07:48.863 [2024-11-15T10:25:49.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.863 [2024-11-15T10:25:49.716Z] =================================================================================================================== 00:07:48.863 [2024-11-15T10:25:49.716Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1075057 00:07:48.863 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.121 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.688 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:49.688 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:49.947 11:25:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1071408 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1071408 00:07:49.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1071408 Killed "${NVMF_APP[@]}" "$@" 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1077379 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1077379 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1077379 ']' 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.947 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.947 [2024-11-15 11:25:50.621638] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:07:49.947 [2024-11-15 11:25:50.621680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.947 [2024-11-15 11:25:50.708533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.947 [2024-11-15 11:25:50.757956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.947 [2024-11-15 11:25:50.757994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.947 [2024-11-15 11:25:50.758008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.947 [2024-11-15 11:25:50.758017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
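Because the dirty variant is being exercised, the target is not shut down cleanly: the lvstore still holds unflushed metadata (61 free clusters at this point), the nvmf_tgt is killed with SIGKILL, and a fresh nvmf_tgt is started in the same network namespace. Re-attaching the AIO bdev afterwards is what forces the blobstore recovery seen just below. A sketch of that step, with the PIDs of this particular run replaced by variables:

kill -9 "$nvmfpid"                                                  # no clean unload: lvstore metadata left dirty
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
./scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # examine path runs blobstore recovery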
00:07:49.947 [2024-11-15 11:25:50.758025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.947 [2024-11-15 11:25:50.758742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.886 [2024-11-15 11:25:51.699267] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:50.886 [2024-11-15 11:25:51.699374] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:50.886 [2024-11-15 11:25:51.699413] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:50.886 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:51.145 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c07e716-2bd9-476f-a05e-acbddfcbca97 -t 2000 00:07:51.404 [ 00:07:51.404 { 00:07:51.404 "name": "5c07e716-2bd9-476f-a05e-acbddfcbca97", 00:07:51.404 "aliases": [ 00:07:51.404 "lvs/lvol" 00:07:51.404 ], 00:07:51.404 "product_name": "Logical Volume", 00:07:51.404 "block_size": 4096, 00:07:51.404 "num_blocks": 38912, 00:07:51.404 "uuid": "5c07e716-2bd9-476f-a05e-acbddfcbca97", 00:07:51.404 "assigned_rate_limits": { 00:07:51.404 "rw_ios_per_sec": 0, 00:07:51.404 "rw_mbytes_per_sec": 0, 
00:07:51.404 "r_mbytes_per_sec": 0, 00:07:51.404 "w_mbytes_per_sec": 0 00:07:51.404 }, 00:07:51.404 "claimed": false, 00:07:51.404 "zoned": false, 00:07:51.404 "supported_io_types": { 00:07:51.404 "read": true, 00:07:51.404 "write": true, 00:07:51.404 "unmap": true, 00:07:51.404 "flush": false, 00:07:51.404 "reset": true, 00:07:51.404 "nvme_admin": false, 00:07:51.404 "nvme_io": false, 00:07:51.404 "nvme_io_md": false, 00:07:51.404 "write_zeroes": true, 00:07:51.404 "zcopy": false, 00:07:51.404 "get_zone_info": false, 00:07:51.404 "zone_management": false, 00:07:51.404 "zone_append": false, 00:07:51.404 "compare": false, 00:07:51.404 "compare_and_write": false, 00:07:51.404 "abort": false, 00:07:51.404 "seek_hole": true, 00:07:51.404 "seek_data": true, 00:07:51.404 "copy": false, 00:07:51.404 "nvme_iov_md": false 00:07:51.404 }, 00:07:51.404 "driver_specific": { 00:07:51.404 "lvol": { 00:07:51.404 "lvol_store_uuid": "43e6eeb7-8ebb-41c7-9fc2-0470563639bb", 00:07:51.404 "base_bdev": "aio_bdev", 00:07:51.404 "thin_provision": false, 00:07:51.404 "num_allocated_clusters": 38, 00:07:51.404 "snapshot": false, 00:07:51.404 "clone": false, 00:07:51.404 "esnap_clone": false 00:07:51.404 } 00:07:51.404 } 00:07:51.404 } 00:07:51.404 ] 00:07:51.404 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:51.404 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:51.404 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:51.404 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:51.404 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:51.404 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:51.662 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:51.663 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:51.922 [2024-11-15 11:25:52.623397] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:51.922 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:52.182 request: 00:07:52.182 { 00:07:52.182 "uuid": "43e6eeb7-8ebb-41c7-9fc2-0470563639bb", 00:07:52.182 "method": "bdev_lvol_get_lvstores", 00:07:52.182 "req_id": 1 00:07:52.182 } 00:07:52.182 Got JSON-RPC error response 00:07:52.182 response: 00:07:52.182 { 00:07:52.182 "code": -19, 00:07:52.182 "message": "No such device" 00:07:52.182 } 00:07:52.182 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:52.182 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.182 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.182 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.182 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.441 aio_bdev 00:07:52.441 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:52.441 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:52.441 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:52.441 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:52.441 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:52.441 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:52.441 11:25:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:52.700 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c07e716-2bd9-476f-a05e-acbddfcbca97 -t 2000 00:07:52.700 [ 00:07:52.700 { 00:07:52.700 "name": "5c07e716-2bd9-476f-a05e-acbddfcbca97", 00:07:52.700 "aliases": [ 00:07:52.700 "lvs/lvol" 00:07:52.700 ], 00:07:52.700 "product_name": "Logical Volume", 00:07:52.700 "block_size": 4096, 00:07:52.700 "num_blocks": 38912, 00:07:52.700 "uuid": "5c07e716-2bd9-476f-a05e-acbddfcbca97", 00:07:52.700 "assigned_rate_limits": { 00:07:52.700 "rw_ios_per_sec": 0, 00:07:52.700 "rw_mbytes_per_sec": 0, 00:07:52.700 "r_mbytes_per_sec": 0, 00:07:52.700 "w_mbytes_per_sec": 0 00:07:52.700 }, 00:07:52.700 "claimed": false, 00:07:52.700 "zoned": false, 00:07:52.700 "supported_io_types": { 00:07:52.700 "read": true, 00:07:52.700 "write": true, 00:07:52.700 "unmap": true, 00:07:52.700 "flush": false, 00:07:52.700 "reset": true, 00:07:52.700 "nvme_admin": false, 00:07:52.700 "nvme_io": false, 00:07:52.700 "nvme_io_md": false, 00:07:52.700 "write_zeroes": true, 00:07:52.700 "zcopy": false, 00:07:52.700 "get_zone_info": false, 00:07:52.700 "zone_management": false, 00:07:52.700 "zone_append": false, 00:07:52.700 "compare": false, 00:07:52.700 "compare_and_write": false, 00:07:52.700 "abort": false, 00:07:52.700 "seek_hole": true, 00:07:52.700 "seek_data": true, 00:07:52.700 "copy": false, 00:07:52.700 "nvme_iov_md": false 00:07:52.700 }, 00:07:52.700 "driver_specific": { 00:07:52.700 "lvol": { 00:07:52.700 "lvol_store_uuid": "43e6eeb7-8ebb-41c7-9fc2-0470563639bb", 00:07:52.700 "base_bdev": "aio_bdev", 00:07:52.700 "thin_provision": false, 00:07:52.700 "num_allocated_clusters": 38, 00:07:52.700 "snapshot": false, 00:07:52.700 "clone": false, 00:07:52.700 "esnap_clone": false 00:07:52.700 } 00:07:52.700 } 00:07:52.700 } 00:07:52.700 ] 00:07:52.700 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:52.700 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:52.700 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:52.959 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:52.959 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:52.959 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:53.527 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:53.527 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c07e716-2bd9-476f-a05e-acbddfcbca97 00:07:53.527 11:25:54 
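After recovery the test verifies that nothing was lost: waitforbdev polls bdev_get_bdevs until the lvol reappears (still 38 allocated clusters, 38912 blocks), and the free and total cluster counts are checked against the grown lvstore. Deleting the AIO bdev is then used as a negative test, since the lvstore is hot-removed with it and bdev_lvol_get_lvstores is expected to fail with "No such device" until the AIO bdev is re-created and recovery runs once more, after which the same checks are repeated. A short sketch of that round trip, using the variables from the earlier sketches:

./scripts/rpc.py bdev_aio_delete aio_bdev                    # hot-removes the lvstore as well
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" && exit 1  # must fail: "No such device"
./scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
./scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000           # wait up to 2 s for the lvol to come back
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99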
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43e6eeb7-8ebb-41c7-9fc2-0470563639bb 00:07:53.785 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.785 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.785 00:07:53.785 real 0m18.791s 00:07:53.785 user 0m48.065s 00:07:53.785 sys 0m3.722s 00:07:53.786 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.786 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.786 ************************************ 00:07:53.786 END TEST lvs_grow_dirty 00:07:53.786 ************************************ 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:54.044 nvmf_trace.0 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:54.044 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.045 rmmod nvme_tcp 00:07:54.045 rmmod nvme_fabrics 00:07:54.045 rmmod nvme_keyring 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:54.045 
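Teardown then mirrors the setup: the lvol, the lvstore and the AIO bdev are deleted, the backing file is removed, the shared-memory trace is archived for offline analysis, and the NVMe/TCP kernel modules are unloaded before the target process itself is killed. A condensed sketch, with the Jenkins output directory replaced by a stand-in $output_dir:

./scripts/rpc.py bdev_lvol_delete "$lvol"
./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
./scripts/rpc.py bdev_aio_delete aio_bdev
rm -f /tmp/aio_bdev_file
tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace for debugging
modprobe -v -r nvme-tcp                                      # also pulls out nvme-fabrics and nvme-keyring
kill "$nvmfpid"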
11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1077379 ']' 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1077379 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1077379 ']' 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1077379 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1077379 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1077379' 00:07:54.045 killing process with pid 1077379 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1077379 00:07:54.045 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1077379 00:07:54.304 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.304 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.304 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.304 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.304 11:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.209 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:56.469 00:07:56.469 real 0m44.932s 00:07:56.469 user 1m10.960s 00:07:56.469 sys 0m10.201s 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.469 ************************************ 00:07:56.469 END TEST nvmf_lvs_grow 00:07:56.469 ************************************ 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.469 ************************************ 00:07:56.469 START TEST nvmf_bdev_io_wait 00:07:56.469 ************************************ 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:56.469 * Looking for test storage... 00:07:56.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.469 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:56.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.470 --rc genhtml_branch_coverage=1 00:07:56.470 --rc genhtml_function_coverage=1 00:07:56.470 --rc genhtml_legend=1 00:07:56.470 --rc geninfo_all_blocks=1 00:07:56.470 --rc geninfo_unexecuted_blocks=1 00:07:56.470 00:07:56.470 ' 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:56.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.470 --rc genhtml_branch_coverage=1 00:07:56.470 --rc genhtml_function_coverage=1 00:07:56.470 --rc genhtml_legend=1 00:07:56.470 --rc geninfo_all_blocks=1 00:07:56.470 --rc geninfo_unexecuted_blocks=1 00:07:56.470 00:07:56.470 ' 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:56.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.470 --rc genhtml_branch_coverage=1 00:07:56.470 --rc genhtml_function_coverage=1 00:07:56.470 --rc genhtml_legend=1 00:07:56.470 --rc geninfo_all_blocks=1 00:07:56.470 --rc geninfo_unexecuted_blocks=1 00:07:56.470 00:07:56.470 ' 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:56.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.470 --rc genhtml_branch_coverage=1 00:07:56.470 --rc genhtml_function_coverage=1 00:07:56.470 --rc genhtml_legend=1 00:07:56.470 --rc geninfo_all_blocks=1 00:07:56.470 --rc geninfo_unexecuted_blocks=1 00:07:56.470 00:07:56.470 ' 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.470 11:25:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.470 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.730 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:02.004 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.004 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:02.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.005 11:26:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:02.005 Found net devices under 0000:af:00.0: cvl_0_0 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:02.005 Found net devices under 0000:af:00.1: cvl_0_1 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:08:02.005 00:08:02.005 --- 10.0.0.2 ping statistics --- 00:08:02.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.005 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:08:02.005 00:08:02.005 --- 10.0.0.1 ping statistics --- 00:08:02.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.005 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1081725 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1081725 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1081725 ']' 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.005 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.006 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.006 11:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.265 [2024-11-15 11:26:02.903694] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
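A condensed sketch of what nvmf_tcp_init has just done, assuming the interface names (cvl_0_0, cvl_0_1) and addresses this rig reports: the target-side port is moved into a private network namespace so initiator and target traffic cross the physical link, TCP port 4420 is opened, and the target application is then started inside that namespace with framework init deferred.

  # Target NIC lives in its own namespace; the initiator NIC stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept NVMe/TCP traffic on the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Launch the target inside the namespace; --wait-for-rpc defers framework init
  # so bdev options can still be set over RPC before startup completes.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc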
00:08:02.265 [2024-11-15 11:26:02.903757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.265 [2024-11-15 11:26:03.007750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.265 [2024-11-15 11:26:03.059378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.265 [2024-11-15 11:26:03.059422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.265 [2024-11-15 11:26:03.059432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.265 [2024-11-15 11:26:03.059441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.265 [2024-11-15 11:26:03.059449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.265 [2024-11-15 11:26:03.061414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.265 [2024-11-15 11:26:03.061517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.265 [2024-11-15 11:26:03.061550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.265 [2024-11-15 11:26:03.061555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:02.525 [2024-11-15 11:26:03.242438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.525 Malloc0 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.525 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.526 [2024-11-15 11:26:03.299556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1081979 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1081981 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.526 { 00:08:02.526 "params": { 
00:08:02.526 "name": "Nvme$subsystem", 00:08:02.526 "trtype": "$TEST_TRANSPORT", 00:08:02.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.526 "adrfam": "ipv4", 00:08:02.526 "trsvcid": "$NVMF_PORT", 00:08:02.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.526 "hdgst": ${hdgst:-false}, 00:08:02.526 "ddgst": ${ddgst:-false} 00:08:02.526 }, 00:08:02.526 "method": "bdev_nvme_attach_controller" 00:08:02.526 } 00:08:02.526 EOF 00:08:02.526 )") 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1081983 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.526 { 00:08:02.526 "params": { 00:08:02.526 "name": "Nvme$subsystem", 00:08:02.526 "trtype": "$TEST_TRANSPORT", 00:08:02.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.526 "adrfam": "ipv4", 00:08:02.526 "trsvcid": "$NVMF_PORT", 00:08:02.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.526 "hdgst": ${hdgst:-false}, 00:08:02.526 "ddgst": ${ddgst:-false} 00:08:02.526 }, 00:08:02.526 "method": "bdev_nvme_attach_controller" 00:08:02.526 } 00:08:02.526 EOF 00:08:02.526 )") 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1081986 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.526 { 00:08:02.526 "params": { 00:08:02.526 "name": 
"Nvme$subsystem", 00:08:02.526 "trtype": "$TEST_TRANSPORT", 00:08:02.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.526 "adrfam": "ipv4", 00:08:02.526 "trsvcid": "$NVMF_PORT", 00:08:02.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.526 "hdgst": ${hdgst:-false}, 00:08:02.526 "ddgst": ${ddgst:-false} 00:08:02.526 }, 00:08:02.526 "method": "bdev_nvme_attach_controller" 00:08:02.526 } 00:08:02.526 EOF 00:08:02.526 )") 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.526 { 00:08:02.526 "params": { 00:08:02.526 "name": "Nvme$subsystem", 00:08:02.526 "trtype": "$TEST_TRANSPORT", 00:08:02.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.526 "adrfam": "ipv4", 00:08:02.526 "trsvcid": "$NVMF_PORT", 00:08:02.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.526 "hdgst": ${hdgst:-false}, 00:08:02.526 "ddgst": ${ddgst:-false} 00:08:02.526 }, 00:08:02.526 "method": "bdev_nvme_attach_controller" 00:08:02.526 } 00:08:02.526 EOF 00:08:02.526 )") 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1081979 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.526 "params": { 00:08:02.526 "name": "Nvme1", 00:08:02.526 "trtype": "tcp", 00:08:02.526 "traddr": "10.0.0.2", 00:08:02.526 "adrfam": "ipv4", 00:08:02.526 "trsvcid": "4420", 00:08:02.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.526 "hdgst": false, 00:08:02.526 "ddgst": false 00:08:02.526 }, 00:08:02.526 "method": "bdev_nvme_attach_controller" 00:08:02.526 }' 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:02.526 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.527 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.527 "params": { 00:08:02.527 "name": "Nvme1", 00:08:02.527 "trtype": "tcp", 00:08:02.527 "traddr": "10.0.0.2", 00:08:02.527 "adrfam": "ipv4", 00:08:02.527 "trsvcid": "4420", 00:08:02.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.527 "hdgst": false, 00:08:02.527 "ddgst": false 00:08:02.527 }, 00:08:02.527 "method": "bdev_nvme_attach_controller" 00:08:02.527 }' 00:08:02.527 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.527 "params": { 00:08:02.527 "name": "Nvme1", 00:08:02.527 "trtype": "tcp", 00:08:02.527 "traddr": "10.0.0.2", 00:08:02.527 "adrfam": "ipv4", 00:08:02.527 "trsvcid": "4420", 00:08:02.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.527 "hdgst": false, 00:08:02.527 "ddgst": false 00:08:02.527 }, 00:08:02.527 "method": "bdev_nvme_attach_controller" 00:08:02.527 }' 00:08:02.527 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.527 11:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.527 "params": { 00:08:02.527 "name": "Nvme1", 00:08:02.527 "trtype": "tcp", 00:08:02.527 "traddr": "10.0.0.2", 00:08:02.527 "adrfam": "ipv4", 00:08:02.527 "trsvcid": "4420", 00:08:02.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.527 "hdgst": false, 00:08:02.527 "ddgst": false 00:08:02.527 }, 00:08:02.527 "method": "bdev_nvme_attach_controller" 00:08:02.527 }' 00:08:02.527 [2024-11-15 11:26:03.356084] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:02.527 [2024-11-15 11:26:03.356146] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:02.527 [2024-11-15 11:26:03.358156] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:02.527 [2024-11-15 11:26:03.358210] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:02.527 [2024-11-15 11:26:03.359693] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:02.527 [2024-11-15 11:26:03.359745] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:02.527 [2024-11-15 11:26:03.360190] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
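Each of the four bdevperf initiators above receives its bdev_nvme_attach_controller parameters (the JSON fragments just printed, assembled by gen_nvmf_target_json and checked with jq) on /dev/fd/63, and runs a single 1-second workload against Nvme1n1 at queue depth 128 with 4 KiB I/O. Stripped of that config plumbing, the four invocations are:

  BDEVPERF=./build/examples/bdevperf
  # write / read / flush / unmap differ only in core mask (-m), instance id (-i) and -w.
  $BDEVPERF -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
  $BDEVPERF -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
  $BDEVPERF -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
  $BDEVPERF -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
  # -s 256 asks the app for 256 MB of hugepage memory; fd 63 must carry the generated
  # NVMe-oF attach config for Nvme1 (traddr 10.0.0.2, trsvcid 4420, subnqn cnode1).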
00:08:02.527 [2024-11-15 11:26:03.360242] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:02.786 [2024-11-15 11:26:03.522789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.786 [2024-11-15 11:26:03.562717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:03.045 [2024-11-15 11:26:03.642558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.045 [2024-11-15 11:26:03.691876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:03.045 [2024-11-15 11:26:03.736970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.045 [2024-11-15 11:26:03.786716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:03.045 [2024-11-15 11:26:03.832101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.305 [2024-11-15 11:26:03.900703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:03.305 Running I/O for 1 seconds... 00:08:03.305 Running I/O for 1 seconds... 00:08:03.305 Running I/O for 1 seconds... 00:08:03.305 Running I/O for 1 seconds... 00:08:04.242 12779.00 IOPS, 49.92 MiB/s 00:08:04.242 Latency(us) 00:08:04.242 [2024-11-15T10:26:05.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.242 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:04.242 Nvme1n1 : 1.01 12819.85 50.08 0.00 0.00 9947.93 5242.88 14120.03 00:08:04.242 [2024-11-15T10:26:05.095Z] =================================================================================================================== 00:08:04.242 [2024-11-15T10:26:05.095Z] Total : 12819.85 50.08 0.00 0.00 9947.93 5242.88 14120.03 00:08:04.242 8138.00 IOPS, 31.79 MiB/s 00:08:04.242 Latency(us) 00:08:04.242 [2024-11-15T10:26:05.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.242 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:04.242 Nvme1n1 : 1.01 8206.11 32.06 0.00 0.00 15530.79 5242.88 23116.33 00:08:04.242 [2024-11-15T10:26:05.095Z] =================================================================================================================== 00:08:04.242 [2024-11-15T10:26:05.095Z] Total : 8206.11 32.06 0.00 0.00 15530.79 5242.88 23116.33 00:08:04.242 11027.00 IOPS, 43.07 MiB/s 00:08:04.242 Latency(us) 00:08:04.242 [2024-11-15T10:26:05.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.242 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:04.242 Nvme1n1 : 1.01 11113.36 43.41 0.00 0.00 11488.74 3395.96 22997.18 00:08:04.242 [2024-11-15T10:26:05.095Z] =================================================================================================================== 00:08:04.242 [2024-11-15T10:26:05.095Z] Total : 11113.36 43.41 0.00 0.00 11488.74 3395.96 22997.18 00:08:04.502 163192.00 IOPS, 637.47 MiB/s 00:08:04.502 Latency(us) 00:08:04.502 [2024-11-15T10:26:05.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.502 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:04.502 Nvme1n1 : 1.00 162809.10 635.97 0.00 0.00 781.25 357.47 2323.55 00:08:04.502 [2024-11-15T10:26:05.355Z] 
=================================================================================================================== 00:08:04.502 [2024-11-15T10:26:05.355Z] Total : 162809.10 635.97 0.00 0.00 781.25 357.47 2323.55 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1081981 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1081983 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1081986 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.502 rmmod nvme_tcp 00:08:04.502 rmmod nvme_fabrics 00:08:04.502 rmmod nvme_keyring 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1081725 ']' 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1081725 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1081725 ']' 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1081725 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.502 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1081725 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 1081725' 00:08:04.762 killing process with pid 1081725 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1081725 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1081725 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.762 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.763 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.299 00:08:07.299 real 0m10.496s 00:08:07.299 user 0m16.757s 00:08:07.299 sys 0m5.825s 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.299 ************************************ 00:08:07.299 END TEST nvmf_bdev_io_wait 00:08:07.299 ************************************ 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.299 ************************************ 00:08:07.299 START TEST nvmf_queue_depth 00:08:07.299 ************************************ 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.299 * Looking for test storage... 
00:08:07.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.299 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.300 --rc genhtml_branch_coverage=1 00:08:07.300 --rc genhtml_function_coverage=1 00:08:07.300 --rc genhtml_legend=1 00:08:07.300 --rc geninfo_all_blocks=1 00:08:07.300 --rc geninfo_unexecuted_blocks=1 00:08:07.300 00:08:07.300 ' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.300 --rc genhtml_branch_coverage=1 00:08:07.300 --rc genhtml_function_coverage=1 00:08:07.300 --rc genhtml_legend=1 00:08:07.300 --rc geninfo_all_blocks=1 00:08:07.300 --rc geninfo_unexecuted_blocks=1 00:08:07.300 00:08:07.300 ' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.300 --rc genhtml_branch_coverage=1 00:08:07.300 --rc genhtml_function_coverage=1 00:08:07.300 --rc genhtml_legend=1 00:08:07.300 --rc geninfo_all_blocks=1 00:08:07.300 --rc geninfo_unexecuted_blocks=1 00:08:07.300 00:08:07.300 ' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.300 --rc genhtml_branch_coverage=1 00:08:07.300 --rc genhtml_function_coverage=1 00:08:07.300 --rc genhtml_legend=1 00:08:07.300 --rc geninfo_all_blocks=1 00:08:07.300 --rc geninfo_unexecuted_blocks=1 00:08:07.300 00:08:07.300 ' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.300 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:12.577 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:12.577 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:12.577 Found net devices under 0000:af:00.0: cvl_0_0 00:08:12.577 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:12.578 Found net devices under 0000:af:00.1: cvl_0_1 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:08:12.578 00:08:12.578 --- 10.0.0.2 ping statistics --- 00:08:12.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.578 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:08:12.578 00:08:12.578 --- 10.0.0.1 ping statistics --- 00:08:12.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.578 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1085802 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1085802 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1085802 ']' 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.578 11:26:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 [2024-11-15 11:26:12.988653] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:12.578 [2024-11-15 11:26:12.988710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.578 [2024-11-15 11:26:13.064038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.578 [2024-11-15 11:26:13.104494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.578 [2024-11-15 11:26:13.104530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.578 [2024-11-15 11:26:13.104537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.578 [2024-11-15 11:26:13.104543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.578 [2024-11-15 11:26:13.104548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.578 [2024-11-15 11:26:13.105105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 [2024-11-15 11:26:13.256345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 Malloc0 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.578 11:26:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.578 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.579 [2024-11-15 11:26:13.302317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1086026 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1086026 /var/tmp/bdevperf.sock 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1086026 ']' 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.579 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.579 [2024-11-15 11:26:13.360042] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:12.579 [2024-11-15 11:26:13.360096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086026 ] 00:08:12.837 [2024-11-15 11:26:13.454097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.838 [2024-11-15 11:26:13.503203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.404 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.404 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:13.404 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:13.404 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.404 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.662 NVMe0n1 00:08:13.663 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.663 11:26:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.663 Running I/O for 10 seconds... 00:08:15.976 10084.00 IOPS, 39.39 MiB/s [2024-11-15T10:26:17.766Z] 10240.00 IOPS, 40.00 MiB/s [2024-11-15T10:26:18.704Z] 10244.67 IOPS, 40.02 MiB/s [2024-11-15T10:26:19.641Z] 10339.75 IOPS, 40.39 MiB/s [2024-11-15T10:26:20.578Z] 10392.20 IOPS, 40.59 MiB/s [2024-11-15T10:26:21.516Z] 10411.67 IOPS, 40.67 MiB/s [2024-11-15T10:26:22.895Z] 10389.00 IOPS, 40.58 MiB/s [2024-11-15T10:26:23.832Z] 10415.00 IOPS, 40.68 MiB/s [2024-11-15T10:26:24.770Z] 10446.89 IOPS, 40.81 MiB/s [2024-11-15T10:26:24.770Z] 10445.40 IOPS, 40.80 MiB/s 00:08:23.917 Latency(us) 00:08:23.917 [2024-11-15T10:26:24.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:23.917 Verification LBA range: start 0x0 length 0x4000 00:08:23.917 NVMe0n1 : 10.06 10482.05 40.95 0.00 0.00 97352.87 23950.43 66250.94 00:08:23.917 [2024-11-15T10:26:24.770Z] =================================================================================================================== 00:08:23.917 [2024-11-15T10:26:24.770Z] Total : 10482.05 40.95 0.00 0.00 97352.87 23950.43 66250.94 00:08:23.917 { 00:08:23.917 "results": [ 00:08:23.917 { 00:08:23.917 "job": "NVMe0n1", 00:08:23.917 "core_mask": "0x1", 00:08:23.917 "workload": "verify", 00:08:23.917 "status": "finished", 00:08:23.917 "verify_range": { 00:08:23.917 "start": 0, 00:08:23.917 "length": 16384 00:08:23.917 }, 00:08:23.917 "queue_depth": 1024, 00:08:23.917 "io_size": 4096, 00:08:23.917 "runtime": 10.062729, 00:08:23.917 "iops": 10482.047166330327, 00:08:23.917 "mibps": 40.94549674347784, 00:08:23.917 "io_failed": 0, 00:08:23.917 "io_timeout": 0, 00:08:23.917 "avg_latency_us": 97352.87119437229, 00:08:23.917 "min_latency_us": 23950.429090909092, 00:08:23.917 "max_latency_us": 66250.93818181819 00:08:23.917 } 00:08:23.917 ], 00:08:23.917 "core_count": 1 00:08:23.917 } 00:08:23.917 11:26:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1086026 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1086026 ']' 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1086026 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1086026 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1086026' 00:08:23.917 killing process with pid 1086026 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1086026 00:08:23.917 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.917 00:08:23.917 Latency(us) 00:08:23.917 [2024-11-15T10:26:24.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.917 [2024-11-15T10:26:24.770Z] =================================================================================================================== 00:08:23.917 [2024-11-15T10:26:24.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.917 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1086026 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.176 rmmod nvme_tcp 00:08:24.176 rmmod nvme_fabrics 00:08:24.176 rmmod nvme_keyring 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1085802 ']' 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1085802 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1085802 ']' 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 1085802 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.176 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1085802 00:08:24.177 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:24.177 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:24.177 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1085802' 00:08:24.177 killing process with pid 1085802 00:08:24.177 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1085802 00:08:24.177 11:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1085802 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.436 11:26:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.973 00:08:26.973 real 0m19.516s 00:08:26.973 user 0m24.407s 00:08:26.973 sys 0m5.314s 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.973 ************************************ 00:08:26.973 END TEST nvmf_queue_depth 00:08:26.973 ************************************ 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.973 ************************************ 00:08:26.973 START TEST nvmf_target_multipath 00:08:26.973 ************************************ 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:26.973 * Looking for test storage... 00:08:26.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.973 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:26.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.974 --rc genhtml_branch_coverage=1 00:08:26.974 --rc genhtml_function_coverage=1 00:08:26.974 --rc genhtml_legend=1 00:08:26.974 --rc geninfo_all_blocks=1 00:08:26.974 --rc geninfo_unexecuted_blocks=1 00:08:26.974 00:08:26.974 ' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:26.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.974 --rc genhtml_branch_coverage=1 00:08:26.974 --rc genhtml_function_coverage=1 00:08:26.974 --rc genhtml_legend=1 00:08:26.974 --rc geninfo_all_blocks=1 00:08:26.974 --rc geninfo_unexecuted_blocks=1 00:08:26.974 00:08:26.974 ' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:26.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.974 --rc genhtml_branch_coverage=1 00:08:26.974 --rc genhtml_function_coverage=1 00:08:26.974 --rc genhtml_legend=1 00:08:26.974 --rc geninfo_all_blocks=1 00:08:26.974 --rc geninfo_unexecuted_blocks=1 00:08:26.974 00:08:26.974 ' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:26.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.974 --rc genhtml_branch_coverage=1 00:08:26.974 --rc genhtml_function_coverage=1 00:08:26.974 --rc genhtml_legend=1 00:08:26.974 --rc geninfo_all_blocks=1 00:08:26.974 --rc geninfo_unexecuted_blocks=1 00:08:26.974 00:08:26.974 ' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.974 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.975 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.975 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.975 11:26:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:32.248 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:32.248 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:32.248 Found net devices under 0000:af:00.0: cvl_0_0 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.248 11:26:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.248 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:32.249 Found net devices under 0000:af:00.1: cvl_0_1 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:32.249 11:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:08:32.249 00:08:32.249 --- 10.0.0.2 ping statistics --- 00:08:32.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.249 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:32.249 00:08:32.249 --- 10.0.0.1 ping statistics --- 00:08:32.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.249 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:32.249 only one NIC for nvmf test 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.249 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.508 rmmod nvme_tcp 00:08:32.508 rmmod nvme_fabrics 00:08:32.508 rmmod nvme_keyring 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.508 11:26:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.413 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.673 00:08:34.673 real 0m7.999s 00:08:34.673 user 0m1.594s 00:08:34.673 sys 0m4.339s 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.673 ************************************ 00:08:34.673 END TEST nvmf_target_multipath 00:08:34.673 ************************************ 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.673 ************************************ 00:08:34.673 START TEST nvmf_zcopy 00:08:34.673 ************************************ 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.673 * Looking for test storage... 
00:08:34.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.673 --rc genhtml_branch_coverage=1 00:08:34.673 --rc genhtml_function_coverage=1 00:08:34.673 --rc genhtml_legend=1 00:08:34.673 --rc geninfo_all_blocks=1 00:08:34.673 --rc geninfo_unexecuted_blocks=1 00:08:34.673 00:08:34.673 ' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.673 --rc genhtml_branch_coverage=1 00:08:34.673 --rc genhtml_function_coverage=1 00:08:34.673 --rc genhtml_legend=1 00:08:34.673 --rc geninfo_all_blocks=1 00:08:34.673 --rc geninfo_unexecuted_blocks=1 00:08:34.673 00:08:34.673 ' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.673 --rc genhtml_branch_coverage=1 00:08:34.673 --rc genhtml_function_coverage=1 00:08:34.673 --rc genhtml_legend=1 00:08:34.673 --rc geninfo_all_blocks=1 00:08:34.673 --rc geninfo_unexecuted_blocks=1 00:08:34.673 00:08:34.673 ' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.673 --rc genhtml_branch_coverage=1 00:08:34.673 --rc genhtml_function_coverage=1 00:08:34.673 --rc genhtml_legend=1 00:08:34.673 --rc geninfo_all_blocks=1 00:08:34.673 --rc geninfo_unexecuted_blocks=1 00:08:34.673 00:08:34.673 ' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.673 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.674 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.933 11:26:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:41.506 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:41.506 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:41.506 Found net devices under 0000:af:00.0: cvl_0_0 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:41.506 Found net devices under 0000:af:00.1: cvl_0_1 00:08:41.506 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:08:41.507 00:08:41.507 --- 10.0.0.2 ping statistics --- 00:08:41.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.507 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:08:41.507 00:08:41.507 --- 10.0.0.1 ping statistics --- 00:08:41.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.507 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1095340 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1095340 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1095340 ']' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 [2024-11-15 11:26:41.509204] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:41.507 [2024-11-15 11:26:41.509269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.507 [2024-11-15 11:26:41.583403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.507 [2024-11-15 11:26:41.621722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.507 [2024-11-15 11:26:41.621758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.507 [2024-11-15 11:26:41.621764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.507 [2024-11-15 11:26:41.621769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.507 [2024-11-15 11:26:41.621774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.507 [2024-11-15 11:26:41.622264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 [2024-11-15 11:26:41.776235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 [2024-11-15 11:26:41.796440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 malloc0 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.507 { 00:08:41.508 "params": { 00:08:41.508 "name": "Nvme$subsystem", 00:08:41.508 "trtype": "$TEST_TRANSPORT", 00:08:41.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.508 "adrfam": "ipv4", 00:08:41.508 "trsvcid": "$NVMF_PORT", 00:08:41.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.508 "hdgst": ${hdgst:-false}, 00:08:41.508 "ddgst": ${ddgst:-false} 00:08:41.508 }, 00:08:41.508 "method": "bdev_nvme_attach_controller" 00:08:41.508 } 00:08:41.508 EOF 00:08:41.508 )") 00:08:41.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:41.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
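The target side of this zcopy run boils down to the RPC sequence below, copied from the rpc_cmd calls traced above. rpc_cmd is the helper from the sourced common scripts and forwards to scripts/rpc.py against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace; the comments are a best-effort reading of the flags, with -o reproduced as passed in the trace.

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy     # TCP transport, zero-copy enabled, in-capsule data size 0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
                                                             # allow any host, serial SPDK00000000000001, up to 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0            # 32 MB RAM-backed bdev with 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1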
00:08:41.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:41.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.508 "params": { 00:08:41.508 "name": "Nvme1", 00:08:41.508 "trtype": "tcp", 00:08:41.508 "traddr": "10.0.0.2", 00:08:41.508 "adrfam": "ipv4", 00:08:41.508 "trsvcid": "4420", 00:08:41.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.508 "hdgst": false, 00:08:41.508 "ddgst": false 00:08:41.508 }, 00:08:41.508 "method": "bdev_nvme_attach_controller" 00:08:41.508 }' 00:08:41.508 [2024-11-15 11:26:41.882233] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:08:41.508 [2024-11-15 11:26:41.882291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095556 ] 00:08:41.508 [2024-11-15 11:26:41.984731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.508 [2024-11-15 11:26:42.051981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.508 Running I/O for 10 seconds... 00:08:43.962 8215.00 IOPS, 64.18 MiB/s [2024-11-15T10:26:45.510Z] 8286.50 IOPS, 64.74 MiB/s [2024-11-15T10:26:46.446Z] 8311.67 IOPS, 64.93 MiB/s [2024-11-15T10:26:47.381Z] 8319.50 IOPS, 65.00 MiB/s [2024-11-15T10:26:48.765Z] 8327.60 IOPS, 65.06 MiB/s [2024-11-15T10:26:49.701Z] 8330.50 IOPS, 65.08 MiB/s [2024-11-15T10:26:50.636Z] 8331.43 IOPS, 65.09 MiB/s [2024-11-15T10:26:51.572Z] 8330.50 IOPS, 65.08 MiB/s [2024-11-15T10:26:52.508Z] 8332.00 IOPS, 65.09 MiB/s [2024-11-15T10:26:52.508Z] 8337.00 IOPS, 65.13 MiB/s 00:08:51.655 Latency(us) 00:08:51.655 [2024-11-15T10:26:52.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.655 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:51.655 Verification LBA range: start 0x0 length 0x1000 00:08:51.655 Nvme1n1 : 10.01 8335.58 65.12 0.00 0.00 15292.49 700.04 22043.93 00:08:51.655 [2024-11-15T10:26:52.508Z] =================================================================================================================== 00:08:51.655 [2024-11-15T10:26:52.508Z] Total : 8335.58 65.12 0.00 0.00 15292.49 700.04 22043.93 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1097454 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:51.915 { 00:08:51.915 "params": { 00:08:51.915 "name": 
"Nvme$subsystem", 00:08:51.915 "trtype": "$TEST_TRANSPORT", 00:08:51.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.915 "adrfam": "ipv4", 00:08:51.915 "trsvcid": "$NVMF_PORT", 00:08:51.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.915 "hdgst": ${hdgst:-false}, 00:08:51.915 "ddgst": ${ddgst:-false} 00:08:51.915 }, 00:08:51.915 "method": "bdev_nvme_attach_controller" 00:08:51.915 } 00:08:51.915 EOF 00:08:51.915 )") 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:51.915 [2024-11-15 11:26:52.532846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.915 [2024-11-15 11:26:52.532879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:51.915 11:26:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:51.915 "params": { 00:08:51.915 "name": "Nvme1", 00:08:51.915 "trtype": "tcp", 00:08:51.915 "traddr": "10.0.0.2", 00:08:51.915 "adrfam": "ipv4", 00:08:51.915 "trsvcid": "4420", 00:08:51.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.915 "hdgst": false, 00:08:51.915 "ddgst": false 00:08:51.915 }, 00:08:51.915 "method": "bdev_nvme_attach_controller" 00:08:51.915 }' 00:08:51.915 [2024-11-15 11:26:52.544840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.915 [2024-11-15 11:26:52.544852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.915 [2024-11-15 11:26:52.556869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.915 [2024-11-15 11:26:52.556879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.915 [2024-11-15 11:26:52.568899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.915 [2024-11-15 11:26:52.568908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.915 [2024-11-15 11:26:52.577096] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:08:51.915 [2024-11-15 11:26:52.577152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097454 ] 00:08:51.915 [2024-11-15 11:26:52.580930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.915 [2024-11-15 11:26:52.580940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.915 [2024-11-15 11:26:52.592963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.915 [2024-11-15 11:26:52.592973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.604994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.605003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.617027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.617036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.629058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.629068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.641087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.641102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.653119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.653128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.665149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.665159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.672987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.916 [2024-11-15 11:26:52.677182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.677191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.689215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.689229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.701247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.701257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.713281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.713290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.721570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.916 [2024-11-15 11:26:52.725313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:51.916 [2024-11-15 11:26:52.725325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.737353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.737370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.749379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.749394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.916 [2024-11-15 11:26:52.761411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.916 [2024-11-15 11:26:52.761424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.773438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.773448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.785475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.785504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.797507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.797517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.809535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.809544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.821585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.821606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.833608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.833622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.845635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.845649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.857667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.857684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.869694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.869704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.881727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.881737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.893760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.893770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 
11:26:52.905795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.905807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.917827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.917837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.929860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.929869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.941895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.941908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.953926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.953936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.965960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.965969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.977994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.978003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.175 [2024-11-15 11:26:52.990029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.175 [2024-11-15 11:26:52.990040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.036745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.036763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.046233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.046245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 Running I/O for 5 seconds... 
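The two bdevperf invocations in this trace differ only in runtime and workload (the first ran -t 10 -w verify, this one -t 5 -w randrw). A minimal sketch of the second run follows, assuming it is issued from the spdk repository root with the target above still listening and with the same environment the test set up (so that gen_nvmf_target_json, defined in test/nvmf/common.sh, resolves to the 10.0.0.2:4420 bdev_nvme_attach_controller config printed earlier in this log); the process substitution plays the role of the /dev/fd/63 descriptor seen in the trace.

    source test/nvmf/common.sh                      # provides gen_nvmf_target_json
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192         # 5 s, queue depth 128, 50/50 random read/write, 8 KiB I/O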
00:08:52.435 [2024-11-15 11:26:53.062227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.062247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.075230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.075249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.088482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.088501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.101379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.101397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.114644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.114663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.127464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.127485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.140248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.140266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.154277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.154295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.167748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.167766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.180621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.180639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.193299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.193319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.206124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.206143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.219060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.219079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.232031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.232050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.245454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 
[2024-11-15 11:26:53.245479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.258415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.258434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.271092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.271111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.435 [2024-11-15 11:26:53.284868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.435 [2024-11-15 11:26:53.284887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.694 [2024-11-15 11:26:53.297953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.694 [2024-11-15 11:26:53.297972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.694 [2024-11-15 11:26:53.310634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.694 [2024-11-15 11:26:53.310652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.694 [2024-11-15 11:26:53.323657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.694 [2024-11-15 11:26:53.323675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.694 [2024-11-15 11:26:53.336440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.694 [2024-11-15 11:26:53.336465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.694 [2024-11-15 11:26:53.350201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.350219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.363073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.363092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.376147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.376166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.388798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.388816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.402105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.402123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.414954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.414972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.428776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.428795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.442316] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.442334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.455393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.455412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.468410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.468429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.481282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.481301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.494252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.494272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.507398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.507417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.521123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.521142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.695 [2024-11-15 11:26:53.534268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.695 [2024-11-15 11:26:53.534287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.547642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.547661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.560673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.560692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.573444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.573470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.586250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.586269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.599514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.599533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.612599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.612627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.626226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.626245] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.639601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.639621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.652752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.652771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.666684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.666704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.679385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.679404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.692828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.692845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.706119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.706136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.719910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.719928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.732984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.733001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.746190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.746208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.759209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.759227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.772497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.772516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.785311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.785329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.954 [2024-11-15 11:26:53.798347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.954 [2024-11-15 11:26:53.798365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.811422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.811439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.824294] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.824312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.837196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.837213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.850078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.850096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.862754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.862776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.875361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.875379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.888357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.888375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.901431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.901449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.914968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.914986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.928698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.928716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.941146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.941164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.954419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.954437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.967887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.967904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.980818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.980836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:53.993769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:53.993788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:54.007503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:54.007522] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:54.020160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:54.020178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:54.033790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:54.033808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 [2024-11-15 11:26:54.046415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:54.046433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.214 18242.00 IOPS, 142.52 MiB/s [2024-11-15T10:26:54.067Z] [2024-11-15 11:26:54.059297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.214 [2024-11-15 11:26:54.059316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.072164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.072182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.085201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.085219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.098139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.098157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.111365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.111387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.124548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.124566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.137481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.137500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.149822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.149840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.473 [2024-11-15 11:26:54.163483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.473 [2024-11-15 11:26:54.163500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.176567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.176585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.189189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.189207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 
11:26:54.202024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.202042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.215432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.215450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.228535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.228553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.242546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.242565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.255104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.255122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.268516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.268535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.281500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.281518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.294883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.294901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.307596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.307615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.474 [2024-11-15 11:26:54.321140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.474 [2024-11-15 11:26:54.321158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.334537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.334556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.347736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.347753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.361178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.361201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.374778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.374796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.387806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.387824] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.400664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.400682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.413874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.413892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.427121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.427139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.439767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.439785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.452497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.452517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.464943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.464961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.478429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.478447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.491895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.491913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.504721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.504739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.518449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.518472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.532258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.532278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.545578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.545596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.558412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.558431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.571546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.571563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.734 [2024-11-15 11:26:54.585043] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.734 [2024-11-15 11:26:54.585061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.598123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.598141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.610841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.610859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.623907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.623925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.636874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.636892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.649206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.649224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.662581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.662600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.675406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.675425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.688327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.688346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.701733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.701753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.714565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.714585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.727465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.727485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.740596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.740614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.753804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.753823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.767113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.767132] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.779686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.779716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.792874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.792892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.805789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.805808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.818658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.818676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.831942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.831960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.993 [2024-11-15 11:26:54.845062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.993 [2024-11-15 11:26:54.845081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.251 [2024-11-15 11:26:54.858796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.251 [2024-11-15 11:26:54.858815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.251 [2024-11-15 11:26:54.871920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.251 [2024-11-15 11:26:54.871940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.251 [2024-11-15 11:26:54.885192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.251 [2024-11-15 11:26:54.885211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.251 [2024-11-15 11:26:54.898741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.251 [2024-11-15 11:26:54.898760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.251 [2024-11-15 11:26:54.912069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.251 [2024-11-15 11:26:54.912088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.251 [2024-11-15 11:26:54.925244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:54.925263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:54.938302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:54.938321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:54.951814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:54.951833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:54.964670] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:54.964688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:54.978101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:54.978119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:54.991191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:54.991210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.004088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.004106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.016537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.016555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.029427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.029445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.042641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.042661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.056082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.056101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 18284.50 IOPS, 142.85 MiB/s [2024-11-15T10:26:55.105Z] [2024-11-15 11:26:55.069825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.069844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.082646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.082664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.252 [2024-11-15 11:26:55.095453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.252 [2024-11-15 11:26:55.095476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.108645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.108664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.122140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.122159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.136082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.136101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.148420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:54.511 [2024-11-15 11:26:55.148439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.162003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.162021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.174935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.174954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.187848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.187866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.201263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.201281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.214046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.214064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.226657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.226674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.240005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.240023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.252657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.252675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.265637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.265655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.278784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.278802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.292031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.292048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.305263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.305281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.319260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.319277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.511 [2024-11-15 11:26:55.332235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.511 [2024-11-15 11:26:55.332253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.512 [2024-11-15 11:26:55.344957] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.512 [2024-11-15 11:26:55.344979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.512 [2024-11-15 11:26:55.358856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.512 [2024-11-15 11:26:55.358874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.371955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.371974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.385072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.385090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.398221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.398239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.411784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.411802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.425265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.425284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.439537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.439556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.453368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.453388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.466194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.466212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.479678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.479696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.492476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.492493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.506161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.506179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.519748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.519766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.533472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.533491] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.546361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.546380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.559472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.559507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.572797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.572815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.586712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.586741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.599422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.599444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.771 [2024-11-15 11:26:55.612998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.771 [2024-11-15 11:26:55.613016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.626091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.626110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.639088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.639106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.652373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.652392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.665333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.665351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.678447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.678470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.691508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.691526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.704633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.704651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.717737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.717756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.730312] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.029 [2024-11-15 11:26:55.730331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.029 [2024-11-15 11:26:55.743635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.743653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.756818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.756836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.769549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.769568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.782994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.783011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.796231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.796250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.809191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.809210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.822364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.822383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.835559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.835577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.848924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.848946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.861994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.862012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.030 [2024-11-15 11:26:55.875855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.030 [2024-11-15 11:26:55.875873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.888699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.888717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.901904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.901923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.914784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.914801] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.928367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.928385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.941673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.941691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.955191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.955210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.969153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.969171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.982271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.982289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:55.995384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:55.995402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.007817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.007835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.020854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.020872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.034017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.034036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.047515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.047534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.060200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.060220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 18297.67 IOPS, 142.95 MiB/s [2024-11-15T10:26:56.142Z] [2024-11-15 11:26:56.074012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.074030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.087175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.087193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.100210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.100228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 
11:26:56.112927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.112945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.125881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.125899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.289 [2024-11-15 11:26:56.139546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.289 [2024-11-15 11:26:56.139565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.548 [2024-11-15 11:26:56.153300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.548 [2024-11-15 11:26:56.153319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.548 [2024-11-15 11:26:56.166810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.548 [2024-11-15 11:26:56.166829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.548 [2024-11-15 11:26:56.179876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.548 [2024-11-15 11:26:56.179896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.548 [2024-11-15 11:26:56.193221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.548 [2024-11-15 11:26:56.193240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.548 [2024-11-15 11:26:56.206450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.548 [2024-11-15 11:26:56.206474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.548 [2024-11-15 11:26:56.219781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.219800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.232399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.232417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.245575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.245593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.258290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.258309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.271331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.271350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.283884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.283902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.296662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.296681] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.309288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.309307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.322547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.322566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.335146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.335164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.348275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.348294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.361440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.361465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.374770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.374788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.387717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.387737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.549 [2024-11-15 11:26:56.401047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.549 [2024-11-15 11:26:56.401066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.413614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.413633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.426434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.426453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.440115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.440134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.453484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.453518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.466809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.466827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.480227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.480246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.493235] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.493254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.808 [2024-11-15 11:26:56.505897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.808 [2024-11-15 11:26:56.505916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.519503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.519522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.532951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.532969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.546167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.546185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.560359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.560378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.573559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.573577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.587082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.587101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.600575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.600593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.614326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.614344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.627347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.627366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.640295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.640313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.809 [2024-11-15 11:26:56.653752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.809 [2024-11-15 11:26:56.653769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.666697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.666715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.679730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.679748] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.692025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.692043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.705349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.705367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.718014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.718032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.731071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.731090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.744691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.744710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.757592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.757611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.770671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.770689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.067 [2024-11-15 11:26:56.783997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.067 [2024-11-15 11:26:56.784015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.797188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.797207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.810565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.810582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.823279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.823297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.836406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.836424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.849239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.849256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.862241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.862259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.874797] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.874815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.887948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.887966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.900756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.900775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.068 [2024-11-15 11:26:56.913747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.068 [2024-11-15 11:26:56.913765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:56.927546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:56.927564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:56.941467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:56.941485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:56.954426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:56.954444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:56.966917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:56.966935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:56.980202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:56.980219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:56.993164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:56.993182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.006269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.006287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.019007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.019026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.032174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.032192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.046085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.046104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.059011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.059029] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 18301.25 IOPS, 142.98 MiB/s [2024-11-15T10:26:57.180Z] [2024-11-15 11:26:57.071922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.071942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.084837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.084861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.097581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.097600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.110081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.110099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.123524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.123543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.137048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.137066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.150573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.150592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.163178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.163196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.327 [2024-11-15 11:26:57.175873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.327 [2024-11-15 11:26:57.175891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.188500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.188518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.201650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.201668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.214278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.214296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.227768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.227786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.240921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.240939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 
11:26:57.253759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.253777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.266775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.266793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.280249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.280267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.293439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.293463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.306488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.306505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.319536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.319554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.332554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.332579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.345703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.345722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.358312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.358330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.371191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.371209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.586 [2024-11-15 11:26:57.384004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.586 [2024-11-15 11:26:57.384022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.587 [2024-11-15 11:26:57.397005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.587 [2024-11-15 11:26:57.397023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.587 [2024-11-15 11:26:57.410433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.587 [2024-11-15 11:26:57.410450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.587 [2024-11-15 11:26:57.423717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.587 [2024-11-15 11:26:57.423735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.587 [2024-11-15 11:26:57.437081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.587 [2024-11-15 11:26:57.437100] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.845 [2024-11-15 11:26:57.449832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.449849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.463678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.463696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.476315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.476333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.489185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.489203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.502021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.502039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.515029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.515048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.527909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.527926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.540791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.540809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.553843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.553860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.566893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.566911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.579917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.579940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.592542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.592559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.605844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.605863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.619736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.619754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.632226] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.632246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.645607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.645627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.658972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.658991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.672130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.672149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.846 [2024-11-15 11:26:57.685204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.846 [2024-11-15 11:26:57.685223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.698944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.698963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.712186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.712204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.725002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.725021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.738117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.738136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.751827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.751845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.765256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.765275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.777936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.777955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.790792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.790810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.804305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.804323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.816650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.816669] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.829413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.829432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.841850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.841868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.855106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.855125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.868028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.868047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.880537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.880555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.893346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.893364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.905890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.905909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.919413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.919432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.932142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.932160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.105 [2024-11-15 11:26:57.944542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.105 [2024-11-15 11:26:57.944560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.106 [2024-11-15 11:26:57.957618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.106 [2024-11-15 11:26:57.957636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:57.971320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:57.971339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:57.985066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:57.985085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:57.998468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:57.998487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:58.011876] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.011895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.024797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.024816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.037741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.037760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.050999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.051017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.064785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.064804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 18347.60 IOPS, 143.34 MiB/s
00:08:57.365 Latency(us)
00:08:57.365 [2024-11-15T10:26:58.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:57.365 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:57.365 Nvme1n1 : 5.01 18349.38 143.35 0.00 0.00 6968.70 3053.38 15252.01
00:08:57.365 [2024-11-15T10:26:58.218Z] ===================================================================================================================
00:08:57.365 [2024-11-15T10:26:58.218Z] Total : 18349.38 143.35 0.00 0.00 6968.70 3053.38 15252.01
00:08:57.365 [2024-11-15 11:26:58.074591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.074608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.086589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.086605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.098613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.098626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.110652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.110670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.122674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.122688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.134706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.134720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.146735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.365 [2024-11-15 11:26:58.146749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.365 [2024-11-15 11:26:58.158767] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:58.158780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:58.170798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:58.170811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:58.182830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:58.182843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:58.194858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:58.194867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.365 [2024-11-15 11:26:58.206892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.365 [2024-11-15 11:26:58.206902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.624 [2024-11-15 11:26:58.218927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.624 [2024-11-15 11:26:58.218939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.624 [2024-11-15 11:26:58.230956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.624 [2024-11-15 11:26:58.230966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.624 [2024-11-15 11:26:58.242988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.624 [2024-11-15 11:26:58.242997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1097454) - No such process 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1097454 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.624 delay0 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.624 11:26:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.624 11:26:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:57.624 [2024-11-15 11:26:58.353909] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:05.745 Initializing NVMe Controllers 00:09:05.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:05.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:05.745 Initialization complete. Launching workers. 00:09:05.745 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 246, failed: 30584 00:09:05.745 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30708, failed to submit 122 00:09:05.745 success 30611, unsuccessful 97, failed 0 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.745 rmmod nvme_tcp 00:09:05.745 rmmod nvme_fabrics 00:09:05.745 rmmod nvme_keyring 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1095340 ']' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1095340 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1095340 ']' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1095340 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1095340 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1095340' 00:09:05.745 killing process with pid 1095340 00:09:05.745 
11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1095340 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1095340 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.745 11:27:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.123 00:09:07.123 real 0m32.547s 00:09:07.123 user 0m43.443s 00:09:07.123 sys 0m12.015s 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.123 ************************************ 00:09:07.123 END TEST nvmf_zcopy 00:09:07.123 ************************************ 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.123 ************************************ 00:09:07.123 START TEST nvmf_nmic 00:09:07.123 ************************************ 00:09:07.123 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.383 * Looking for test storage... 
00:09:07.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:07.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.383 --rc genhtml_branch_coverage=1 00:09:07.383 --rc genhtml_function_coverage=1 00:09:07.383 --rc genhtml_legend=1 00:09:07.383 --rc geninfo_all_blocks=1 00:09:07.383 --rc geninfo_unexecuted_blocks=1 00:09:07.383 00:09:07.383 ' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:07.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.383 --rc genhtml_branch_coverage=1 00:09:07.383 --rc genhtml_function_coverage=1 00:09:07.383 --rc genhtml_legend=1 00:09:07.383 --rc geninfo_all_blocks=1 00:09:07.383 --rc geninfo_unexecuted_blocks=1 00:09:07.383 00:09:07.383 ' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:07.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.383 --rc genhtml_branch_coverage=1 00:09:07.383 --rc genhtml_function_coverage=1 00:09:07.383 --rc genhtml_legend=1 00:09:07.383 --rc geninfo_all_blocks=1 00:09:07.383 --rc geninfo_unexecuted_blocks=1 00:09:07.383 00:09:07.383 ' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:07.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.383 --rc genhtml_branch_coverage=1 00:09:07.383 --rc genhtml_function_coverage=1 00:09:07.383 --rc genhtml_legend=1 00:09:07.383 --rc geninfo_all_blocks=1 00:09:07.383 --rc geninfo_unexecuted_blocks=1 00:09:07.383 00:09:07.383 ' 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.383 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.384 
11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.384 11:27:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:13.951 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.951 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:13.952 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.952 11:27:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:13.952 Found net devices under 0000:af:00.0: cvl_0_0 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:13.952 Found net devices under 0000:af:00.1: cvl_0_1 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.952 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
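The probe above resolves each detected Intel E810 function (0000:af:00.0 and 0000:af:00.1, device 0x159b) to its kernel network interface through sysfs, which is how cvl_0_0 and cvl_0_1 are discovered. A rough stand-alone sketch of that lookup, using the PCI addresses reported above (everything else is illustrative):

#!/usr/bin/env bash
# List the net interfaces the kernel created for each PCI network function,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace.
pci_devs=(0000:af:00.0 0000:af:00.1)   # taken from the "Found ..." lines above
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done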
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:09:13.953 00:09:13.953 --- 10.0.0.2 ping statistics --- 00:09:13.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.953 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:09:13.953 00:09:13.953 --- 10.0.0.1 ping statistics --- 00:09:13.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.953 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1103895 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1103895 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1103895 ']' 00:09:13.953 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.954 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.954 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.954 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.954 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.954 [2024-11-15 11:27:13.921838] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
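The nvmf_tcp_init phase traced above isolates the target-side port in a private network namespace, addresses both ends, opens the listener port in the firewall, verifies reachability, and then starts nvmf_tgt inside that namespace. Condensed from the commands in the trace (interface and namespace names are the ones printed above; the nvmf_tgt path is relative to an SPDK build tree; run as root):

# Target port goes into its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP listener port in; the comment tag lets cleanup find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions, then launch the target application in the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &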
00:09:13.954 [2024-11-15 11:27:13.921900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.954 [2024-11-15 11:27:14.027082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.954 [2024-11-15 11:27:14.078603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.954 [2024-11-15 11:27:14.078648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.954 [2024-11-15 11:27:14.078658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.954 [2024-11-15 11:27:14.078668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.954 [2024-11-15 11:27:14.078675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.954 [2024-11-15 11:27:14.080627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.954 [2024-11-15 11:27:14.080728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.954 [2024-11-15 11:27:14.080817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.954 [2024-11-15 11:27:14.080828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.954 [2024-11-15 11:27:14.230214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.954 Malloc0 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.954 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.955 [2024-11-15 11:27:14.304838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:13.955 test case1: single bdev can't be used in multiple subsystems 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.955 [2024-11-15 11:27:14.332698] bdev.c:8468:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:13.955 [2024-11-15 11:27:14.332724] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:13.955 [2024-11-15 11:27:14.332734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.955 request: 00:09:13.955 { 00:09:13.955 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:13.955 "namespace": { 00:09:13.955 "bdev_name": "Malloc0", 00:09:13.955 "no_auto_visible": false, 
00:09:13.955 "no_metadata": false 00:09:13.955 }, 00:09:13.955 "method": "nvmf_subsystem_add_ns", 00:09:13.955 "req_id": 1 00:09:13.955 } 00:09:13.955 Got JSON-RPC error response 00:09:13.955 response: 00:09:13.955 { 00:09:13.955 "code": -32602, 00:09:13.955 "message": "Invalid parameters" 00:09:13.955 } 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:13.955 Adding namespace failed - expected result. 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:13.955 test case2: host connect to nvmf target in multiple paths 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.955 [2024-11-15 11:27:14.344860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.955 11:27:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.900 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:16.277 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.277 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:16.277 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.277 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:16.277 11:27:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:18.179 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:18.179 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:18.179 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.179 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:18.179 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.179 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:18.179 11:27:19 
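The two test cases above reduce to a short JSON-RPC sequence: Malloc0 is claimed by cnode1, so adding it to cnode2 is expected to fail with code -32602 "Invalid parameters" (the expected result echoed above), while cnode1 itself is exposed on two listeners (4420 and 4421) and connected over both paths. Roughly the same flow driven directly with scripts/rpc.py from the SPDK tree, assuming the default RPC socket; hostnqn/hostid are the generated values shown earlier:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# A second subsystem cannot claim the same bdev; this call should fail with "Invalid parameters".
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "expected failure"

# Host side: connect to the same subsystem through both listeners.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
    --hostid=00abaa28-3537-eb11-906e-0017a4403562
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
    --hostid=00abaa28-3537-eb11-906e-0017a4403562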
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:18.455 [global] 00:09:18.455 thread=1 00:09:18.455 invalidate=1 00:09:18.455 rw=write 00:09:18.455 time_based=1 00:09:18.455 runtime=1 00:09:18.455 ioengine=libaio 00:09:18.455 direct=1 00:09:18.455 bs=4096 00:09:18.455 iodepth=1 00:09:18.455 norandommap=0 00:09:18.455 numjobs=1 00:09:18.455 00:09:18.455 verify_dump=1 00:09:18.455 verify_backlog=512 00:09:18.455 verify_state_save=0 00:09:18.455 do_verify=1 00:09:18.455 verify=crc32c-intel 00:09:18.455 [job0] 00:09:18.455 filename=/dev/nvme0n1 00:09:18.455 Could not set queue depth (nvme0n1) 00:09:18.716 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.716 fio-3.35 00:09:18.716 Starting 1 thread 00:09:20.089 00:09:20.089 job0: (groupid=0, jobs=1): err= 0: pid=1105032: Fri Nov 15 11:27:20 2024 00:09:20.089 read: IOPS=2531, BW=9.89MiB/s (10.4MB/s)(9.89MiB/1000msec) 00:09:20.089 slat (nsec): min=7509, max=42658, avg=8641.33, stdev=1441.71 00:09:20.089 clat (usec): min=173, max=602, avg=202.92, stdev=28.46 00:09:20.089 lat (usec): min=181, max=636, avg=211.57, stdev=28.77 00:09:20.089 clat percentiles (usec): 00:09:20.089 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:09:20.089 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:09:20.089 | 70.00th=[ 212], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 251], 00:09:20.089 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 420], 99.95th=[ 529], 00:09:20.089 | 99.99th=[ 603] 00:09:20.089 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:09:20.089 slat (usec): min=11, max=25888, avg=22.68, stdev=511.42 00:09:20.089 clat (usec): min=120, max=1427, avg=151.55, stdev=34.71 00:09:20.089 lat (usec): min=132, max=26223, avg=174.22, stdev=516.22 00:09:20.089 clat percentiles (usec): 00:09:20.089 | 1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 131], 00:09:20.089 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 161], 00:09:20.089 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:09:20.089 | 99.00th=[ 194], 99.50th=[ 217], 99.90th=[ 379], 99.95th=[ 545], 00:09:20.089 | 99.99th=[ 1434] 00:09:20.089 bw ( KiB/s): min=12176, max=12176, per=100.00%, avg=12176.00, stdev= 0.00, samples=1 00:09:20.089 iops : min= 3044, max= 3044, avg=3044.00, stdev= 0.00, samples=1 00:09:20.089 lat (usec) : 250=97.33%, 500=2.59%, 750=0.06% 00:09:20.089 lat (msec) : 2=0.02% 00:09:20.089 cpu : usr=4.50%, sys=8.40%, ctx=5094, majf=0, minf=1 00:09:20.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.089 issued rwts: total=2531,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.089 00:09:20.089 Run status group 0 (all jobs): 00:09:20.089 READ: bw=9.89MiB/s (10.4MB/s), 9.89MiB/s-9.89MiB/s (10.4MB/s-10.4MB/s), io=9.89MiB (10.4MB), run=1000-1000msec 00:09:20.089 WRITE: bw=10.0MiB/s (10.5MB/s), 10.0MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1000-1000msec 00:09:20.089 00:09:20.089 Disk stats (read/write): 00:09:20.089 nvme0n1: ios=2080/2560, merge=0/0, ticks=1388/367, in_queue=1755, util=98.60% 00:09:20.089 11:27:20 
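The fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to essentially the [global]/[job0] job dumped in the output: a 1-second, queue-depth-1, 4 KiB sequential-write job with crc32c verification against the connected namespace. A sketch of the equivalent plain fio job, written out by hand; the device node depends on how the kernel enumerated the connect (here it came up as /dev/nvme0n1):

cat > /tmp/nvmf_write_verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf_write_verify.fio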
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:20.089 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.090 rmmod nvme_tcp 00:09:20.090 rmmod nvme_fabrics 00:09:20.090 rmmod nvme_keyring 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1103895 ']' 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1103895 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1103895 ']' 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1103895 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1103895 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1103895' 00:09:20.090 killing process with pid 1103895 00:09:20.090 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1103895 00:09:20.090 11:27:20 
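The disconnect and nvmftestfini teardown traced here and continued just below undo the earlier setup: drop the host connections, unload the initiator modules, stop nvmf_tgt, strip only the SPDK_NVMF firewall rules, and remove the namespace addressing. A condensed sketch of that path (run as root; the namespace-removal command is an assumption, since the _remove_spdk_ns body is not shown in the trace):

# Drop the host-side connections for the test subsystem.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Unload initiator modules and stop the target application.
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid is the nvmf_tgt PID recorded at startup (1103895 above)

# Keep every iptables rule except the SPDK_NVMF-tagged ones, then clean up the namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1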
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1103895 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.348 11:27:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.251 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.510 00:09:22.510 real 0m15.159s 00:09:22.510 user 0m39.225s 00:09:22.510 sys 0m5.310s 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.510 ************************************ 00:09:22.510 END TEST nvmf_nmic 00:09:22.510 ************************************ 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.510 ************************************ 00:09:22.510 START TEST nvmf_fio_target 00:09:22.510 ************************************ 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.510 * Looking for test storage... 
00:09:22.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:22.510 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.769 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:22.769 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.769 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.769 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.769 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.770 --rc genhtml_branch_coverage=1 00:09:22.770 --rc genhtml_function_coverage=1 00:09:22.770 --rc genhtml_legend=1 00:09:22.770 --rc geninfo_all_blocks=1 00:09:22.770 --rc geninfo_unexecuted_blocks=1 00:09:22.770 00:09:22.770 ' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.770 --rc genhtml_branch_coverage=1 00:09:22.770 --rc genhtml_function_coverage=1 00:09:22.770 --rc genhtml_legend=1 00:09:22.770 --rc geninfo_all_blocks=1 00:09:22.770 --rc geninfo_unexecuted_blocks=1 00:09:22.770 00:09:22.770 ' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.770 --rc genhtml_branch_coverage=1 00:09:22.770 --rc genhtml_function_coverage=1 00:09:22.770 --rc genhtml_legend=1 00:09:22.770 --rc geninfo_all_blocks=1 00:09:22.770 --rc geninfo_unexecuted_blocks=1 00:09:22.770 00:09:22.770 ' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.770 --rc genhtml_branch_coverage=1 00:09:22.770 --rc genhtml_function_coverage=1 00:09:22.770 --rc genhtml_legend=1 00:09:22.770 --rc geninfo_all_blocks=1 00:09:22.770 --rc geninfo_unexecuted_blocks=1 00:09:22.770 00:09:22.770 ' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
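The lcov probe traced above (lt 1.15 2 via cmp_versions) splits each version string on '.', '-' or ':' and compares the numeric fields left to right. A simplified re-implementation of that idea, not the script's exact code, assuming purely numeric fields:

#!/usr/bin/env bash
# Returns success when version $1 is strictly older than version $2.
version_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2.x"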
uname -s 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.770 11:27:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.770 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.039 11:27:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.039 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:28.040 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:28.040 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.040 11:27:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:28.040 Found net devices under 0000:af:00.0: cvl_0_0 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:28.040 Found net devices under 0000:af:00.1: cvl_0_1 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.040 11:27:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.040 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.299 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.299 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.299 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.299 11:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.299 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.299 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:28.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:09:28.300 00:09:28.300 --- 10.0.0.2 ping statistics --- 00:09:28.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.300 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:09:28.300 00:09:28.300 --- 10.0.0.1 ping statistics --- 00:09:28.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.300 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1109020 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1109020 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1109020 ']' 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.300 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.558 [2024-11-15 11:27:29.186288] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:09:28.558 [2024-11-15 11:27:29.186343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.558 [2024-11-15 11:27:29.288280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.559 [2024-11-15 11:27:29.337941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.559 [2024-11-15 11:27:29.337983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.559 [2024-11-15 11:27:29.337994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.559 [2024-11-15 11:27:29.338006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.559 [2024-11-15 11:27:29.338013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.559 [2024-11-15 11:27:29.340068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.559 [2024-11-15 11:27:29.340167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.559 [2024-11-15 11:27:29.340275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.559 [2024-11-15 11:27:29.340276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.817 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:28.817 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:28.817 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.818 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.818 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.818 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.818 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.076 [2024-11-15 11:27:29.741226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.076 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:29.336 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:29.336 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:29.595 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:29.595 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:29.853 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:29.853 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.111 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:30.111 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:30.111 11:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.370 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:30.370 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.938 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:30.938 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.938 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:30.938 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:31.505 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:31.505 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:31.505 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.764 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:31.764 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:32.022 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.281 [2024-11-15 11:27:33.065276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.281 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:32.539 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:33.107 11:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.485 11:27:34 
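For reference, the target-side bring-up traced above condenses to the shell sketch below. Interface names, addresses, NQNs and RPC arguments are copied from this run; RPC stands in for the full path to spdk/scripts/rpc.py, the SPDK binary path is shortened, and the waits and error handling done by common.sh and fio.sh are omitted, so this is only an approximation of what the scripts actually execute.

# network plumbing: move the target port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
# start the target inside the namespace, then wire up the subsystem over the RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
RPC="scripts/rpc.py"                                                 # shortened path for this sketch
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                                       # issued seven times for Malloc0..Malloc6
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # then Malloc1, raid0, concat0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: connect (with the --hostnqn/--hostid echoed above) and wait until the
# four namespaces appear as /dev/nvme0n1..n4
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420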
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:34.485 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:34.485 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.485 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:34.485 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:34.485 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:36.394 11:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:36.394 [global] 00:09:36.394 thread=1 00:09:36.394 invalidate=1 00:09:36.394 rw=write 00:09:36.394 time_based=1 00:09:36.394 runtime=1 00:09:36.394 ioengine=libaio 00:09:36.394 direct=1 00:09:36.394 bs=4096 00:09:36.394 iodepth=1 00:09:36.394 norandommap=0 00:09:36.394 numjobs=1 00:09:36.394 00:09:36.394 verify_dump=1 00:09:36.394 verify_backlog=512 00:09:36.394 verify_state_save=0 00:09:36.394 do_verify=1 00:09:36.394 verify=crc32c-intel 00:09:36.394 [job0] 00:09:36.394 filename=/dev/nvme0n1 00:09:36.394 [job1] 00:09:36.394 filename=/dev/nvme0n2 00:09:36.394 [job2] 00:09:36.394 filename=/dev/nvme0n3 00:09:36.394 [job3] 00:09:36.394 filename=/dev/nvme0n4 00:09:36.394 Could not set queue depth (nvme0n1) 00:09:36.394 Could not set queue depth (nvme0n2) 00:09:36.394 Could not set queue depth (nvme0n3) 00:09:36.394 Could not set queue depth (nvme0n4) 00:09:36.661 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.661 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.661 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.661 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.661 fio-3.35 00:09:36.661 Starting 4 threads 00:09:38.038 00:09:38.038 job0: (groupid=0, jobs=1): err= 0: pid=1110634: Fri Nov 15 11:27:38 2024 00:09:38.038 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:09:38.038 slat (nsec): min=11010, max=23568, avg=20923.52, stdev=2225.17 00:09:38.038 clat (usec): min=396, max=41175, avg=39217.44, stdev=8462.95 00:09:38.038 lat (usec): min=407, max=41198, avg=39238.37, stdev=8465.11 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[ 396], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:09:38.038 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.038 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:38.038 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:38.038 | 99.99th=[41157] 00:09:38.038 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:38.038 slat (nsec): min=10169, max=44089, avg=12911.44, stdev=2543.52 00:09:38.038 clat (usec): min=137, max=3876, avg=180.38, stdev=165.73 00:09:38.038 lat (usec): min=149, max=3888, avg=193.29, stdev=165.79 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:09:38.038 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:38.038 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 239], 00:09:38.038 | 99.00th=[ 265], 99.50th=[ 326], 99.90th=[ 3884], 99.95th=[ 3884], 00:09:38.038 | 99.99th=[ 3884] 00:09:38.038 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.038 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.038 lat (usec) : 250=94.39%, 500=1.31% 00:09:38.038 lat (msec) : 4=0.19%, 50=4.11% 00:09:38.038 cpu : usr=0.60%, sys=0.80%, ctx=535, majf=0, minf=1 00:09:38.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.038 job1: (groupid=0, jobs=1): err= 0: pid=1110654: Fri Nov 15 11:27:38 2024 00:09:38.038 read: IOPS=1836, BW=7346KiB/s (7522kB/s)(7588KiB/1033msec) 00:09:38.038 slat (nsec): min=6534, max=29306, avg=7438.24, stdev=1229.58 00:09:38.038 clat (usec): min=202, max=41088, avg=331.29, stdev=1868.00 00:09:38.038 lat (usec): min=209, max=41099, avg=338.73, stdev=1868.56 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:09:38.038 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:09:38.038 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:09:38.038 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[41157], 99.95th=[41157], 00:09:38.038 | 99.99th=[41157] 00:09:38.038 write: IOPS=1982, BW=7930KiB/s (8121kB/s)(8192KiB/1033msec); 0 zone resets 00:09:38.038 slat (nsec): min=9546, max=36812, avg=11083.32, stdev=1745.25 00:09:38.038 clat (usec): min=142, max=423, avg=175.30, stdev=18.90 00:09:38.038 lat (usec): min=153, max=437, avg=186.38, stdev=19.74 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:38.038 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:09:38.038 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 212], 00:09:38.038 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 253], 99.95th=[ 363], 00:09:38.038 | 99.99th=[ 424] 00:09:38.038 bw ( KiB/s): min= 6608, max= 9776, per=59.03%, avg=8192.00, stdev=2240.11, samples=2 00:09:38.038 iops : min= 1652, max= 2444, avg=2048.00, stdev=560.03, samples=2 00:09:38.038 lat (usec) : 250=83.98%, 500=15.92% 00:09:38.038 lat (msec) : 50=0.10% 00:09:38.038 cpu : usr=2.03%, sys=3.49%, ctx=3945, majf=0, minf=1 00:09:38.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:09:38.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 issued rwts: total=1897,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.038 job2: (groupid=0, jobs=1): err= 0: pid=1110690: Fri Nov 15 11:27:38 2024 00:09:38.038 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:09:38.038 slat (nsec): min=10299, max=23457, avg=22393.36, stdev=2709.06 00:09:38.038 clat (usec): min=40890, max=41964, avg=41102.84, stdev=348.55 00:09:38.038 lat (usec): min=40913, max=41987, avg=41125.24, stdev=348.64 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:38.038 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.038 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:38.038 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:38.038 | 99.99th=[42206] 00:09:38.038 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:09:38.038 slat (nsec): min=9483, max=61714, avg=11121.31, stdev=2734.16 00:09:38.038 clat (usec): min=143, max=411, avg=202.64, stdev=40.48 00:09:38.038 lat (usec): min=156, max=440, avg=213.76, stdev=40.70 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:38.038 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 231], 00:09:38.038 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 253], 00:09:38.038 | 99.00th=[ 273], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 412], 00:09:38.038 | 99.99th=[ 412] 00:09:38.038 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.038 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.038 lat (usec) : 250=88.39%, 500=7.49% 00:09:38.038 lat (msec) : 50=4.12% 00:09:38.038 cpu : usr=0.59%, sys=0.30%, ctx=535, majf=0, minf=1 00:09:38.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.038 job3: (groupid=0, jobs=1): err= 0: pid=1110694: Fri Nov 15 11:27:38 2024 00:09:38.038 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:09:38.038 slat (nsec): min=10332, max=23140, avg=22250.00, stdev=2665.83 00:09:38.038 clat (usec): min=40818, max=42074, avg=41054.87, stdev=313.62 00:09:38.038 lat (usec): min=40841, max=42097, avg=41077.12, stdev=313.99 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:38.038 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.038 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:38.038 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:38.038 | 99.99th=[42206] 00:09:38.038 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:38.038 slat (nsec): min=9329, max=59890, avg=10511.69, stdev=2390.57 00:09:38.038 clat (usec): min=136, max=506, avg=219.02, stdev=38.97 00:09:38.038 lat 
(usec): min=147, max=516, avg=229.53, stdev=39.39 00:09:38.038 clat percentiles (usec): 00:09:38.038 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:09:38.038 | 30.00th=[ 186], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 241], 00:09:38.038 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 253], 00:09:38.038 | 99.00th=[ 265], 99.50th=[ 322], 99.90th=[ 506], 99.95th=[ 506], 00:09:38.038 | 99.99th=[ 506] 00:09:38.038 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.038 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.038 lat (usec) : 250=86.89%, 500=8.80%, 750=0.19% 00:09:38.038 lat (msec) : 50=4.12% 00:09:38.038 cpu : usr=0.49%, sys=0.20%, ctx=535, majf=0, minf=1 00:09:38.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.038 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.038 00:09:38.038 Run status group 0 (all jobs): 00:09:38.038 READ: bw=7605KiB/s (7788kB/s), 86.1KiB/s-7346KiB/s (88.2kB/s-7522kB/s), io=7856KiB (8045kB), run=1003-1033msec 00:09:38.039 WRITE: bw=13.6MiB/s (14.2MB/s), 2004KiB/s-7930KiB/s (2052kB/s-8121kB/s), io=14.0MiB (14.7MB), run=1003-1033msec 00:09:38.039 00:09:38.039 Disk stats (read/write): 00:09:38.039 nvme0n1: ios=68/512, merge=0/0, ticks=835/87, in_queue=922, util=86.37% 00:09:38.039 nvme0n2: ios=1626/2048, merge=0/0, ticks=395/346, in_queue=741, util=82.49% 00:09:38.039 nvme0n3: ios=16/512, merge=0/0, ticks=657/104, in_queue=761, util=87.36% 00:09:38.039 nvme0n4: ios=16/512, merge=0/0, ticks=657/114, in_queue=771, util=89.11% 00:09:38.039 11:27:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:38.039 [global] 00:09:38.039 thread=1 00:09:38.039 invalidate=1 00:09:38.039 rw=randwrite 00:09:38.039 time_based=1 00:09:38.039 runtime=1 00:09:38.039 ioengine=libaio 00:09:38.039 direct=1 00:09:38.039 bs=4096 00:09:38.039 iodepth=1 00:09:38.039 norandommap=0 00:09:38.039 numjobs=1 00:09:38.039 00:09:38.039 verify_dump=1 00:09:38.039 verify_backlog=512 00:09:38.039 verify_state_save=0 00:09:38.039 do_verify=1 00:09:38.039 verify=crc32c-intel 00:09:38.039 [job0] 00:09:38.039 filename=/dev/nvme0n1 00:09:38.039 [job1] 00:09:38.039 filename=/dev/nvme0n2 00:09:38.039 [job2] 00:09:38.039 filename=/dev/nvme0n3 00:09:38.039 [job3] 00:09:38.039 filename=/dev/nvme0n4 00:09:38.039 Could not set queue depth (nvme0n1) 00:09:38.039 Could not set queue depth (nvme0n2) 00:09:38.039 Could not set queue depth (nvme0n3) 00:09:38.039 Could not set queue depth (nvme0n4) 00:09:38.297 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.297 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.297 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.297 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.297 fio-3.35 00:09:38.297 Starting 4 threads 00:09:39.672 00:09:39.672 job0: (groupid=0, jobs=1): err= 0: pid=1111151: 
Fri Nov 15 11:27:40 2024 00:09:39.672 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:09:39.672 slat (nsec): min=9589, max=22776, avg=16884.77, stdev=5424.39 00:09:39.672 clat (usec): min=421, max=41503, avg=39190.50, stdev=8660.71 00:09:39.672 lat (usec): min=443, max=41513, avg=39207.38, stdev=8659.60 00:09:39.672 clat percentiles (usec): 00:09:39.672 | 1.00th=[ 420], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:39.673 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.673 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:39.673 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:39.673 | 99.99th=[41681] 00:09:39.673 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:39.673 slat (nsec): min=9801, max=41111, avg=11334.13, stdev=2170.22 00:09:39.673 clat (usec): min=199, max=403, avg=261.63, stdev=30.46 00:09:39.673 lat (usec): min=211, max=415, avg=272.97, stdev=30.60 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:09:39.673 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 265], 00:09:39.673 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:09:39.673 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 404], 99.95th=[ 404], 00:09:39.673 | 99.99th=[ 404] 00:09:39.673 bw ( KiB/s): min= 4096, max= 4096, per=25.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.673 lat (usec) : 250=36.70%, 500=59.36% 00:09:39.673 lat (msec) : 50=3.93% 00:09:39.673 cpu : usr=0.50%, sys=0.80%, ctx=535, majf=0, minf=1 00:09:39.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.673 job1: (groupid=0, jobs=1): err= 0: pid=1111163: Fri Nov 15 11:27:40 2024 00:09:39.673 read: IOPS=2467, BW=9870KiB/s (10.1MB/s)(9880KiB/1001msec) 00:09:39.673 slat (nsec): min=6761, max=45073, avg=7950.19, stdev=1664.64 00:09:39.673 clat (usec): min=160, max=1105, avg=225.88, stdev=29.01 00:09:39.673 lat (usec): min=181, max=1116, avg=233.83, stdev=29.07 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:09:39.673 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:09:39.673 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:09:39.673 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 441], 99.95th=[ 457], 00:09:39.673 | 99.99th=[ 1106] 00:09:39.673 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:39.673 slat (nsec): min=9694, max=49380, avg=10728.66, stdev=1735.86 00:09:39.673 clat (usec): min=111, max=1872, avg=148.67, stdev=38.58 00:09:39.673 lat (usec): min=126, max=1888, avg=159.40, stdev=38.76 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 135], 00:09:39.673 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:09:39.673 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:09:39.673 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 269], 99.95th=[ 570], 00:09:39.673 | 99.99th=[ 1876] 
00:09:39.673 bw ( KiB/s): min=12288, max=12288, per=75.30%, avg=12288.00, stdev= 0.00, samples=1 00:09:39.673 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:39.673 lat (usec) : 250=92.33%, 500=7.61%, 750=0.02% 00:09:39.673 lat (msec) : 2=0.04% 00:09:39.673 cpu : usr=4.20%, sys=7.50%, ctx=5030, majf=0, minf=1 00:09:39.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 issued rwts: total=2470,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.673 job2: (groupid=0, jobs=1): err= 0: pid=1111180: Fri Nov 15 11:27:40 2024 00:09:39.673 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:09:39.673 slat (nsec): min=9696, max=24117, avg=22457.27, stdev=2866.90 00:09:39.673 clat (usec): min=40899, max=42058, avg=41211.22, stdev=422.64 00:09:39.673 lat (usec): min=40922, max=42082, avg=41233.68, stdev=422.79 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:39.673 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.673 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:39.673 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:39.673 | 99.99th=[42206] 00:09:39.673 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:39.673 slat (nsec): min=9390, max=41283, avg=10466.65, stdev=1792.00 00:09:39.673 clat (usec): min=138, max=343, avg=171.07, stdev=24.23 00:09:39.673 lat (usec): min=148, max=353, avg=181.54, stdev=24.58 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:39.673 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:09:39.673 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 239], 00:09:39.673 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 343], 99.95th=[ 343], 00:09:39.673 | 99.99th=[ 343] 00:09:39.673 bw ( KiB/s): min= 4096, max= 4096, per=25.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.673 lat (usec) : 250=95.51%, 500=0.37% 00:09:39.673 lat (msec) : 50=4.12% 00:09:39.673 cpu : usr=0.30%, sys=0.50%, ctx=537, majf=0, minf=1 00:09:39.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.673 job3: (groupid=0, jobs=1): err= 0: pid=1111186: Fri Nov 15 11:27:40 2024 00:09:39.673 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:09:39.673 slat (nsec): min=9866, max=27244, avg=23171.38, stdev=3202.68 00:09:39.673 clat (usec): min=40745, max=41930, avg=41003.95, stdev=224.42 00:09:39.673 lat (usec): min=40755, max=41953, avg=41027.12, stdev=225.23 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:39.673 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.673 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:09:39.673 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:39.673 | 99.99th=[41681] 00:09:39.673 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:39.673 slat (nsec): min=10676, max=40010, avg=12003.94, stdev=1644.82 00:09:39.673 clat (usec): min=210, max=359, avg=259.32, stdev=23.98 00:09:39.673 lat (usec): min=222, max=371, avg=271.32, stdev=24.01 00:09:39.673 clat percentiles (usec): 00:09:39.673 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:09:39.673 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 265], 00:09:39.673 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:09:39.673 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 359], 00:09:39.673 | 99.99th=[ 359] 00:09:39.673 bw ( KiB/s): min= 4096, max= 4096, per=25.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.673 lat (usec) : 250=37.52%, 500=58.54% 00:09:39.673 lat (msec) : 50=3.94% 00:09:39.673 cpu : usr=0.80%, sys=0.50%, ctx=534, majf=0, minf=1 00:09:39.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.673 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.673 00:09:39.673 Run status group 0 (all jobs): 00:09:39.673 READ: bw=9.86MiB/s (10.3MB/s), 83.7KiB/s-9870KiB/s (85.8kB/s-10.1MB/s), io=9.90MiB (10.4MB), run=1001-1004msec 00:09:39.673 WRITE: bw=15.9MiB/s (16.7MB/s), 2040KiB/s-9.99MiB/s (2089kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1004msec 00:09:39.673 00:09:39.673 Disk stats (read/write): 00:09:39.673 nvme0n1: ios=68/512, merge=0/0, ticks=716/126, in_queue=842, util=86.57% 00:09:39.673 nvme0n2: ios=2048/2307, merge=0/0, ticks=433/315, in_queue=748, util=86.67% 00:09:39.673 nvme0n3: ios=43/512, merge=0/0, ticks=1728/86, in_queue=1814, util=98.12% 00:09:39.673 nvme0n4: ios=55/512, merge=0/0, ticks=1536/131, in_queue=1667, util=99.16% 00:09:39.673 11:27:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:39.673 [global] 00:09:39.673 thread=1 00:09:39.673 invalidate=1 00:09:39.673 rw=write 00:09:39.673 time_based=1 00:09:39.673 runtime=1 00:09:39.673 ioengine=libaio 00:09:39.673 direct=1 00:09:39.673 bs=4096 00:09:39.673 iodepth=128 00:09:39.673 norandommap=0 00:09:39.673 numjobs=1 00:09:39.673 00:09:39.673 verify_dump=1 00:09:39.673 verify_backlog=512 00:09:39.673 verify_state_save=0 00:09:39.673 do_verify=1 00:09:39.674 verify=crc32c-intel 00:09:39.674 [job0] 00:09:39.674 filename=/dev/nvme0n1 00:09:39.674 [job1] 00:09:39.674 filename=/dev/nvme0n2 00:09:39.674 [job2] 00:09:39.674 filename=/dev/nvme0n3 00:09:39.674 [job3] 00:09:39.674 filename=/dev/nvme0n4 00:09:39.674 Could not set queue depth (nvme0n1) 00:09:39.674 Could not set queue depth (nvme0n2) 00:09:39.674 Could not set queue depth (nvme0n3) 00:09:39.674 Could not set queue depth (nvme0n4) 00:09:39.932 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.932 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:39.932 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.932 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.932 fio-3.35 00:09:39.932 Starting 4 threads 00:09:41.311 00:09:41.311 job0: (groupid=0, jobs=1): err= 0: pid=1111628: Fri Nov 15 11:27:41 2024 00:09:41.311 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:41.311 slat (nsec): min=1608, max=45077k, avg=183511.13, stdev=1379093.68 00:09:41.311 clat (usec): min=1311, max=83819, avg=23151.51, stdev=12850.43 00:09:41.311 lat (usec): min=1318, max=83827, avg=23335.02, stdev=12905.89 00:09:41.311 clat percentiles (usec): 00:09:41.311 | 1.00th=[ 3916], 5.00th=[ 7570], 10.00th=[10290], 20.00th=[14877], 00:09:41.311 | 30.00th=[15926], 40.00th=[17171], 50.00th=[20317], 60.00th=[23987], 00:09:41.312 | 70.00th=[26870], 80.00th=[30802], 90.00th=[38011], 95.00th=[45351], 00:09:41.312 | 99.00th=[78119], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:09:41.312 | 99.99th=[83362] 00:09:41.312 write: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1003msec); 0 zone resets 00:09:41.312 slat (usec): min=2, max=10748, avg=116.94, stdev=704.31 00:09:41.312 clat (usec): min=325, max=74588, avg=15741.13, stdev=13280.88 00:09:41.312 lat (usec): min=755, max=74597, avg=15858.07, stdev=13361.66 00:09:41.312 clat percentiles (usec): 00:09:41.312 | 1.00th=[ 3130], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 9372], 00:09:41.312 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:09:41.312 | 70.00th=[12911], 80.00th=[16909], 90.00th=[34866], 95.00th=[41157], 00:09:41.312 | 99.00th=[73925], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:09:41.312 | 99.99th=[74974] 00:09:41.312 bw ( KiB/s): min=11136, max=16384, per=23.60%, avg=13760.00, stdev=3710.90, samples=2 00:09:41.312 iops : min= 2784, max= 4096, avg=3440.00, stdev=927.72, samples=2 00:09:41.312 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.11% 00:09:41.312 lat (msec) : 2=0.12%, 4=1.16%, 10=16.40%, 20=50.02%, 50=28.40% 00:09:41.312 lat (msec) : 100=3.77% 00:09:41.312 cpu : usr=2.50%, sys=3.79%, ctx=276, majf=0, minf=2 00:09:41.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:41.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.312 issued rwts: total=3072,3568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.312 job1: (groupid=0, jobs=1): err= 0: pid=1111643: Fri Nov 15 11:27:41 2024 00:09:41.312 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:09:41.312 slat (usec): min=2, max=12538, avg=107.73, stdev=863.71 00:09:41.312 clat (usec): min=5294, max=45500, avg=14705.83, stdev=6099.61 00:09:41.312 lat (usec): min=5300, max=45507, avg=14813.55, stdev=6190.23 00:09:41.312 clat percentiles (usec): 00:09:41.312 | 1.00th=[ 5342], 5.00th=[ 8094], 10.00th=[ 9634], 20.00th=[10683], 00:09:41.312 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:09:41.312 | 70.00th=[14091], 80.00th=[17957], 90.00th=[22152], 95.00th=[24511], 00:09:41.312 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:09:41.312 | 99.99th=[45351] 00:09:41.312 write: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec); 0 zone resets 00:09:41.312 slat (usec): min=3, max=19930, avg=157.86, 
stdev=984.69 00:09:41.312 clat (usec): min=619, max=114426, avg=23418.37, stdev=23619.20 00:09:41.312 lat (usec): min=650, max=114438, avg=23576.24, stdev=23770.71 00:09:41.312 clat percentiles (msec): 00:09:41.312 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 9], 00:09:41.312 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 15], 00:09:41.312 | 70.00th=[ 23], 80.00th=[ 35], 90.00th=[ 60], 95.00th=[ 84], 00:09:41.312 | 99.00th=[ 102], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 115], 00:09:41.312 | 99.99th=[ 115] 00:09:41.312 bw ( KiB/s): min= 7656, max=19976, per=23.70%, avg=13816.00, stdev=8711.56, samples=2 00:09:41.312 iops : min= 1914, max= 4994, avg=3454.00, stdev=2177.89, samples=2 00:09:41.312 lat (usec) : 750=0.09% 00:09:41.312 lat (msec) : 4=1.56%, 10=18.77%, 20=55.34%, 50=17.66%, 100=5.98% 00:09:41.312 lat (msec) : 250=0.60% 00:09:41.312 cpu : usr=1.67%, sys=4.82%, ctx=272, majf=0, minf=1 00:09:41.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:41.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.312 issued rwts: total=3072,3582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.312 job2: (groupid=0, jobs=1): err= 0: pid=1111651: Fri Nov 15 11:27:41 2024 00:09:41.312 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:09:41.312 slat (nsec): min=1640, max=27854k, avg=167406.33, stdev=1185134.54 00:09:41.312 clat (usec): min=5117, max=65290, avg=19749.54, stdev=11627.42 00:09:41.312 lat (usec): min=5123, max=65319, avg=19916.94, stdev=11720.33 00:09:41.312 clat percentiles (usec): 00:09:41.312 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[12911], 00:09:41.312 | 30.00th=[13173], 40.00th=[15401], 50.00th=[15664], 60.00th=[16319], 00:09:41.312 | 70.00th=[17433], 80.00th=[23725], 90.00th=[38536], 95.00th=[51119], 00:09:41.312 | 99.00th=[57934], 99.50th=[60031], 99.90th=[60031], 99.95th=[65274], 00:09:41.312 | 99.99th=[65274] 00:09:41.312 write: IOPS=2025, BW=8103KiB/s (8298kB/s)(8168KiB/1008msec); 0 zone resets 00:09:41.312 slat (usec): min=3, max=16685, avg=347.86, stdev=1539.64 00:09:41.312 clat (msec): min=5, max=144, avg=47.21, stdev=34.06 00:09:41.312 lat (msec): min=6, max=144, avg=47.56, stdev=34.27 00:09:41.312 clat percentiles (msec): 00:09:41.312 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:09:41.312 | 30.00th=[ 20], 40.00th=[ 33], 50.00th=[ 36], 60.00th=[ 45], 00:09:41.312 | 70.00th=[ 59], 80.00th=[ 75], 90.00th=[ 104], 95.00th=[ 115], 00:09:41.312 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:09:41.312 | 99.99th=[ 144] 00:09:41.312 bw ( KiB/s): min= 5624, max= 9688, per=13.13%, avg=7656.00, stdev=2873.68, samples=2 00:09:41.312 iops : min= 1406, max= 2422, avg=1914.00, stdev=718.42, samples=2 00:09:41.312 lat (msec) : 10=3.49%, 20=46.79%, 50=27.08%, 100=15.34%, 250=7.29% 00:09:41.312 cpu : usr=1.39%, sys=2.38%, ctx=240, majf=0, minf=1 00:09:41.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:09:41.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.312 issued rwts: total=1536,2042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.312 job3: (groupid=0, jobs=1): err= 0: 
pid=1111652: Fri Nov 15 11:27:41 2024 00:09:41.312 read: IOPS=5189, BW=20.3MiB/s (21.3MB/s)(20.3MiB/1002msec) 00:09:41.312 slat (usec): min=2, max=19911, avg=84.01, stdev=522.28 00:09:41.312 clat (usec): min=932, max=30243, avg=10482.76, stdev=3557.76 00:09:41.312 lat (usec): min=3426, max=30250, avg=10566.77, stdev=3592.94 00:09:41.312 clat percentiles (usec): 00:09:41.312 | 1.00th=[ 6194], 5.00th=[ 7439], 10.00th=[ 8094], 20.00th=[ 8455], 00:09:41.312 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10159], 00:09:41.312 | 70.00th=[10814], 80.00th=[11994], 90.00th=[13566], 95.00th=[15795], 00:09:41.312 | 99.00th=[29754], 99.50th=[29754], 99.90th=[30278], 99.95th=[30278], 00:09:41.312 | 99.99th=[30278] 00:09:41.312 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:41.312 slat (usec): min=3, max=9593, avg=93.78, stdev=435.31 00:09:41.312 clat (usec): min=1529, max=30220, avg=12883.26, stdev=5717.94 00:09:41.312 lat (usec): min=1541, max=30225, avg=12977.05, stdev=5758.87 00:09:41.312 clat percentiles (usec): 00:09:41.312 | 1.00th=[ 5407], 5.00th=[ 7373], 10.00th=[ 8356], 20.00th=[ 8717], 00:09:41.312 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[11207], 00:09:41.312 | 70.00th=[15795], 80.00th=[18482], 90.00th=[22676], 95.00th=[23725], 00:09:41.312 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30278], 99.95th=[30278], 00:09:41.312 | 99.99th=[30278] 00:09:41.312 bw ( KiB/s): min=19320, max=25360, per=38.32%, avg=22340.00, stdev=4270.92, samples=2 00:09:41.312 iops : min= 4830, max= 6340, avg=5585.00, stdev=1067.73, samples=2 00:09:41.312 lat (usec) : 1000=0.01% 00:09:41.312 lat (msec) : 2=0.03%, 4=0.47%, 10=53.79%, 20=35.98%, 50=9.72% 00:09:41.312 cpu : usr=5.39%, sys=6.59%, ctx=650, majf=0, minf=1 00:09:41.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:41.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.312 issued rwts: total=5200,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.312 00:09:41.312 Run status group 0 (all jobs): 00:09:41.312 READ: bw=49.5MiB/s (51.9MB/s), 6095KiB/s-20.3MiB/s (6242kB/s-21.3MB/s), io=50.3MiB (52.8MB), run=1002-1017msec 00:09:41.312 WRITE: bw=56.9MiB/s (59.7MB/s), 8103KiB/s-22.0MiB/s (8298kB/s-23.0MB/s), io=57.9MiB (60.7MB), run=1002-1017msec 00:09:41.312 00:09:41.312 Disk stats (read/write): 00:09:41.312 nvme0n1: ios=2613/2911, merge=0/0, ticks=30996/29535, in_queue=60531, util=99.80% 00:09:41.312 nvme0n2: ios=3118/3135, merge=0/0, ticks=43954/57022, in_queue=100976, util=96.52% 00:09:41.312 nvme0n3: ios=1024/1391, merge=0/0, ticks=10034/40840, in_queue=50874, util=88.29% 00:09:41.312 nvme0n4: ios=4096/4399, merge=0/0, ticks=22781/31594, in_queue=54375, util=89.21% 00:09:41.313 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:41.313 [global] 00:09:41.313 thread=1 00:09:41.313 invalidate=1 00:09:41.313 rw=randwrite 00:09:41.313 time_based=1 00:09:41.313 runtime=1 00:09:41.313 ioengine=libaio 00:09:41.313 direct=1 00:09:41.313 bs=4096 00:09:41.313 iodepth=128 00:09:41.313 norandommap=0 00:09:41.313 numjobs=1 00:09:41.313 00:09:41.313 verify_dump=1 00:09:41.313 verify_backlog=512 00:09:41.313 verify_state_save=0 00:09:41.313 do_verify=1 
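Each fio pass in this test is launched through scripts/fio-wrapper; going by the [global] and [job] stanzas it prints, the iodepth=128 write pass that just finished corresponds roughly to the standalone job sketched below. Parameter values and device paths are copied from the printed stanzas; the file name nvmf-write.fio is only illustrative, and the wrapper itself may assemble the job differently.

# hypothetical standalone equivalent of 'fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v'
cat > nvmf-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-write.fio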
00:09:41.313 verify=crc32c-intel 00:09:41.313 [job0] 00:09:41.313 filename=/dev/nvme0n1 00:09:41.313 [job1] 00:09:41.313 filename=/dev/nvme0n2 00:09:41.313 [job2] 00:09:41.313 filename=/dev/nvme0n3 00:09:41.313 [job3] 00:09:41.313 filename=/dev/nvme0n4 00:09:41.313 Could not set queue depth (nvme0n1) 00:09:41.313 Could not set queue depth (nvme0n2) 00:09:41.313 Could not set queue depth (nvme0n3) 00:09:41.313 Could not set queue depth (nvme0n4) 00:09:41.572 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.572 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.572 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.572 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.572 fio-3.35 00:09:41.572 Starting 4 threads 00:09:42.951 00:09:42.951 job0: (groupid=0, jobs=1): err= 0: pid=1112072: Fri Nov 15 11:27:43 2024 00:09:42.951 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:09:42.951 slat (nsec): min=1630, max=13777k, avg=90919.24, stdev=529050.47 00:09:42.951 clat (usec): min=2162, max=48607, avg=11649.40, stdev=4619.12 00:09:42.951 lat (usec): min=2175, max=52960, avg=11740.32, stdev=4635.90 00:09:42.951 clat percentiles (usec): 00:09:42.951 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 9110], 00:09:42.951 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11076], 60.00th=[11731], 00:09:42.951 | 70.00th=[12125], 80.00th=[12780], 90.00th=[15270], 95.00th=[17695], 00:09:42.951 | 99.00th=[33817], 99.50th=[39060], 99.90th=[48497], 99.95th=[48497], 00:09:42.951 | 99.99th=[48497] 00:09:42.951 write: IOPS=5346, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1008msec); 0 zone resets 00:09:42.951 slat (usec): min=2, max=13552, avg=94.65, stdev=472.03 00:09:42.951 clat (usec): min=3974, max=40659, avg=12534.93, stdev=5914.47 00:09:42.951 lat (usec): min=5468, max=40684, avg=12629.58, stdev=5941.36 00:09:42.951 clat percentiles (usec): 00:09:42.951 | 1.00th=[ 6259], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9372], 00:09:42.951 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 60.00th=[11207], 00:09:42.951 | 70.00th=[11731], 80.00th=[12911], 90.00th=[19530], 95.00th=[28705], 00:09:42.951 | 99.00th=[35390], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:09:42.951 | 99.99th=[40633] 00:09:42.951 bw ( KiB/s): min=20480, max=21608, per=28.31%, avg=21044.00, stdev=797.62, samples=2 00:09:42.951 iops : min= 5120, max= 5402, avg=5261.00, stdev=199.40, samples=2 00:09:42.951 lat (msec) : 4=0.20%, 10=36.22%, 20=56.84%, 50=6.75% 00:09:42.951 cpu : usr=3.08%, sys=4.87%, ctx=661, majf=0, minf=1 00:09:42.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:42.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.951 issued rwts: total=5120,5389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.951 job1: (groupid=0, jobs=1): err= 0: pid=1112073: Fri Nov 15 11:27:43 2024 00:09:42.951 read: IOPS=4921, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1005msec) 00:09:42.951 slat (nsec): min=1784, max=16473k, avg=95828.17, stdev=713334.88 00:09:42.952 clat (usec): min=892, max=37086, avg=12347.42, stdev=4471.99 00:09:42.952 lat (usec): min=910, 
max=37109, avg=12443.24, stdev=4537.88 00:09:42.952 clat percentiles (usec): 00:09:42.952 | 1.00th=[ 1811], 5.00th=[ 5932], 10.00th=[ 8455], 20.00th=[ 9634], 00:09:42.952 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:09:42.952 | 70.00th=[14353], 80.00th=[16319], 90.00th=[17957], 95.00th=[20579], 00:09:42.952 | 99.00th=[26608], 99.50th=[29230], 99.90th=[29492], 99.95th=[32900], 00:09:42.952 | 99.99th=[36963] 00:09:42.952 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:42.952 slat (usec): min=2, max=9583, avg=93.66, stdev=550.10 00:09:42.952 clat (usec): min=1411, max=40886, avg=12972.39, stdev=6514.49 00:09:42.952 lat (usec): min=1423, max=40895, avg=13066.05, stdev=6566.89 00:09:42.952 clat percentiles (usec): 00:09:42.952 | 1.00th=[ 5997], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9634], 00:09:42.952 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:09:42.952 | 70.00th=[11469], 80.00th=[13829], 90.00th=[19792], 95.00th=[29754], 00:09:42.952 | 99.00th=[37487], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:09:42.952 | 99.99th=[40633] 00:09:42.952 bw ( KiB/s): min=16120, max=24576, per=27.38%, avg=20348.00, stdev=5979.29, samples=2 00:09:42.952 iops : min= 4030, max= 6144, avg=5087.00, stdev=1494.82, samples=2 00:09:42.952 lat (usec) : 1000=0.30% 00:09:42.952 lat (msec) : 2=0.41%, 4=0.91%, 10=28.37%, 20=61.57%, 50=8.43% 00:09:42.952 cpu : usr=4.68%, sys=6.18%, ctx=441, majf=0, minf=2 00:09:42.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:42.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.952 issued rwts: total=4946,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.952 job2: (groupid=0, jobs=1): err= 0: pid=1112074: Fri Nov 15 11:27:43 2024 00:09:42.952 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:09:42.952 slat (usec): min=2, max=9301, avg=101.21, stdev=560.56 00:09:42.952 clat (usec): min=2647, max=47336, avg=12877.14, stdev=5473.99 00:09:42.952 lat (usec): min=2656, max=47363, avg=12978.35, stdev=5520.80 00:09:42.952 clat percentiles (usec): 00:09:42.952 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10683], 00:09:42.952 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:09:42.952 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[16188], 00:09:42.952 | 99.00th=[40633], 99.50th=[42730], 99.90th=[42730], 99.95th=[46400], 00:09:42.952 | 99.99th=[47449] 00:09:42.952 write: IOPS=5138, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1002msec); 0 zone resets 00:09:42.952 slat (usec): min=3, max=3880, avg=86.74, stdev=429.64 00:09:42.952 clat (usec): min=1576, max=31698, avg=11771.56, stdev=1944.21 00:09:42.952 lat (usec): min=2269, max=31721, avg=11858.31, stdev=1979.70 00:09:42.952 clat percentiles (usec): 00:09:42.952 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:09:42.952 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:09:42.952 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13042], 95.00th=[13566], 00:09:42.952 | 99.00th=[18482], 99.50th=[24511], 99.90th=[27919], 99.95th=[27919], 00:09:42.952 | 99.99th=[31589] 00:09:42.952 bw ( KiB/s): min=19536, max=21424, per=27.55%, avg=20480.00, stdev=1335.02, samples=2 00:09:42.952 iops : min= 4884, max= 5356, avg=5120.00, stdev=333.75, 
samples=2 00:09:42.952 lat (msec) : 2=0.01%, 4=0.38%, 10=6.79%, 20=89.94%, 50=2.88% 00:09:42.952 cpu : usr=4.40%, sys=7.09%, ctx=509, majf=0, minf=1 00:09:42.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:42.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.952 issued rwts: total=5120,5149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.952 job3: (groupid=0, jobs=1): err= 0: pid=1112075: Fri Nov 15 11:27:43 2024 00:09:42.952 read: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1005msec) 00:09:42.952 slat (nsec): min=1704, max=46954k, avg=163219.19, stdev=1167169.12 00:09:42.952 clat (usec): min=2821, max=62141, avg=21085.92, stdev=10505.36 00:09:42.952 lat (usec): min=7178, max=66211, avg=21249.14, stdev=10529.95 00:09:42.952 clat percentiles (usec): 00:09:42.952 | 1.00th=[ 8455], 5.00th=[12649], 10.00th=[14091], 20.00th=[14746], 00:09:42.952 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17957], 60.00th=[18220], 00:09:42.952 | 70.00th=[18744], 80.00th=[27132], 90.00th=[29754], 95.00th=[40633], 00:09:42.952 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:09:42.952 | 99.99th=[62129] 00:09:42.952 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:42.952 slat (usec): min=2, max=15649, avg=161.20, stdev=940.01 00:09:42.952 clat (usec): min=1423, max=50134, avg=20924.05, stdev=7427.21 00:09:42.952 lat (usec): min=1446, max=50151, avg=21085.25, stdev=7453.48 00:09:42.952 clat percentiles (usec): 00:09:42.952 | 1.00th=[ 8717], 5.00th=[13304], 10.00th=[14746], 20.00th=[16319], 00:09:42.952 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:09:42.952 | 70.00th=[22414], 80.00th=[27657], 90.00th=[31851], 95.00th=[36439], 00:09:42.952 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:09:42.952 | 99.99th=[50070] 00:09:42.952 bw ( KiB/s): min=12288, max=12288, per=16.53%, avg=12288.00, stdev= 0.00, samples=2 00:09:42.952 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:42.952 lat (msec) : 2=0.03%, 4=0.02%, 10=1.80%, 20=67.92%, 50=27.80% 00:09:42.952 lat (msec) : 100=2.43% 00:09:42.952 cpu : usr=2.69%, sys=3.39%, ctx=250, majf=0, minf=2 00:09:42.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:42.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.952 issued rwts: total=2972,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.952 00:09:42.952 Run status group 0 (all jobs): 00:09:42.952 READ: bw=70.4MiB/s (73.8MB/s), 11.6MiB/s-20.0MiB/s (12.1MB/s-20.9MB/s), io=70.9MiB (74.4MB), run=1002-1008msec 00:09:42.952 WRITE: bw=72.6MiB/s (76.1MB/s), 11.9MiB/s-20.9MiB/s (12.5MB/s-21.9MB/s), io=73.2MiB (76.7MB), run=1002-1008msec 00:09:42.952 00:09:42.952 Disk stats (read/write): 00:09:42.952 nvme0n1: ios=4211/4608, merge=0/0, ticks=20466/25485, in_queue=45951, util=97.29% 00:09:42.952 nvme0n2: ios=4489/4608, merge=0/0, ticks=26063/21697, in_queue=47760, util=86.27% 00:09:42.952 nvme0n3: ios=4117/4471, merge=0/0, ticks=19109/15991, in_queue=35100, util=98.23% 00:09:42.952 nvme0n4: ios=2573/2800, merge=0/0, ticks=15845/19001, in_queue=34846, util=90.65% 
00:09:42.952 11:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:42.952 11:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1112284 00:09:42.952 11:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:42.952 11:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:42.952 [global] 00:09:42.952 thread=1 00:09:42.952 invalidate=1 00:09:42.952 rw=read 00:09:42.952 time_based=1 00:09:42.952 runtime=10 00:09:42.952 ioengine=libaio 00:09:42.952 direct=1 00:09:42.952 bs=4096 00:09:42.952 iodepth=1 00:09:42.952 norandommap=1 00:09:42.952 numjobs=1 00:09:42.952 00:09:42.952 [job0] 00:09:42.952 filename=/dev/nvme0n1 00:09:42.952 [job1] 00:09:42.952 filename=/dev/nvme0n2 00:09:42.952 [job2] 00:09:42.952 filename=/dev/nvme0n3 00:09:42.952 [job3] 00:09:42.952 filename=/dev/nvme0n4 00:09:42.952 Could not set queue depth (nvme0n1) 00:09:42.952 Could not set queue depth (nvme0n2) 00:09:42.952 Could not set queue depth (nvme0n3) 00:09:42.952 Could not set queue depth (nvme0n4) 00:09:43.211 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.211 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.211 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.211 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.211 fio-3.35 00:09:43.211 Starting 4 threads 00:09:45.745 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:46.004 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:46.262 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4046848, buflen=4096 00:09:46.262 fio: pid=1112504, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:46.521 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.521 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:46.521 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41000960, buflen=4096 00:09:46.521 fio: pid=1112503, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:46.779 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.779 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:46.779 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:09:46.779 fio: pid=1112497, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:47.039 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:47.039 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:47.039 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54444032, buflen=4096 00:09:47.039 fio: pid=1112501, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:47.039 00:09:47.039 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1112497: Fri Nov 15 11:27:47 2024 00:09:47.039 read: IOPS=24, BW=98.1KiB/s (100kB/s)(324KiB/3303msec) 00:09:47.039 slat (nsec): min=12263, max=74190, avg=25103.80, stdev=7701.77 00:09:47.039 clat (usec): min=589, max=41984, avg=40480.17, stdev=4489.38 00:09:47.039 lat (usec): min=620, max=42012, avg=40505.31, stdev=4488.73 00:09:47.039 clat percentiles (usec): 00:09:47.039 | 1.00th=[ 594], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:47.039 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:47.039 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:47.039 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:47.039 | 99.99th=[42206] 00:09:47.039 bw ( KiB/s): min= 93, max= 104, per=0.36%, avg=98.17, stdev= 4.67, samples=6 00:09:47.039 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:09:47.039 lat (usec) : 750=1.22% 00:09:47.039 lat (msec) : 50=97.56% 00:09:47.039 cpu : usr=0.12%, sys=0.00%, ctx=86, majf=0, minf=1 00:09:47.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.039 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.039 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.039 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1112501: Fri Nov 15 11:27:47 2024 00:09:47.039 read: IOPS=3707, BW=14.5MiB/s (15.2MB/s)(51.9MiB/3585msec) 00:09:47.039 slat (usec): min=5, max=8622, avg=10.45, stdev=133.25 00:09:47.039 clat (usec): min=158, max=41128, avg=256.69, stdev=1118.24 00:09:47.039 lat (usec): min=165, max=41137, avg=267.14, stdev=1127.02 00:09:47.039 clat percentiles (usec): 00:09:47.039 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 206], 00:09:47.039 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:09:47.039 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:09:47.040 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 465], 99.95th=[41157], 00:09:47.040 | 99.99th=[41157] 00:09:47.040 bw ( KiB/s): min=15976, max=19200, per=62.20%, avg=16914.67, stdev=1162.32, samples=6 00:09:47.040 iops : min= 3994, max= 4800, avg=4228.67, stdev=290.58, samples=6 00:09:47.040 lat (usec) : 250=88.69%, 500=11.22% 00:09:47.040 lat (msec) : 4=0.01%, 50=0.08% 00:09:47.040 cpu : usr=1.65%, sys=4.85%, ctx=13299, majf=0, minf=2 00:09:47.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.040 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.040 issued rwts: total=13293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.040 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1112503: Fri Nov 15 11:27:47 2024 00:09:47.040 read: IOPS=3325, BW=13.0MiB/s (13.6MB/s)(39.1MiB/3010msec) 00:09:47.040 slat (nsec): min=5009, max=31340, avg=7513.28, stdev=1359.16 00:09:47.040 clat (usec): min=177, max=42036, avg=289.87, stdev=1441.43 00:09:47.040 lat (usec): min=184, max=42058, avg=297.39, stdev=1441.95 00:09:47.040 clat percentiles (usec): 00:09:47.040 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:09:47.040 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:09:47.040 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:09:47.040 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[41157], 99.95th=[41157], 00:09:47.040 | 99.99th=[42206] 00:09:47.040 bw ( KiB/s): min= 1232, max=16536, per=47.04%, avg=12792.00, stdev=6500.35, samples=5 00:09:47.040 iops : min= 308, max= 4134, avg=3198.00, stdev=1625.09, samples=5 00:09:47.040 lat (usec) : 250=82.80%, 500=17.02%, 750=0.04% 00:09:47.040 lat (msec) : 50=0.13% 00:09:47.040 cpu : usr=0.86%, sys=2.99%, ctx=10011, majf=0, minf=2 00:09:47.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.040 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.040 issued rwts: total=10011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.040 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1112504: Fri Nov 15 11:27:47 2024 00:09:47.040 read: IOPS=360, BW=1439KiB/s (1473kB/s)(3952KiB/2747msec) 00:09:47.040 slat (nsec): min=5700, max=64411, avg=9462.67, stdev=6528.00 00:09:47.040 clat (usec): min=205, max=42017, avg=2747.56, stdev=9688.96 00:09:47.040 lat (usec): min=212, max=42039, avg=2757.01, stdev=9692.21 00:09:47.040 clat percentiles (usec): 00:09:47.040 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:09:47.040 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 306], 00:09:47.040 | 70.00th=[ 326], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[41157], 00:09:47.040 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:47.040 | 99.99th=[42206] 00:09:47.040 bw ( KiB/s): min= 96, max= 4320, per=5.78%, avg=1571.20, stdev=1858.43, samples=5 00:09:47.040 iops : min= 24, max= 1080, avg=392.80, stdev=464.61, samples=5 00:09:47.040 lat (usec) : 250=13.14%, 500=80.38%, 750=0.30% 00:09:47.040 lat (msec) : 10=0.10%, 50=5.97% 00:09:47.040 cpu : usr=0.04%, sys=0.44%, ctx=989, majf=0, minf=2 00:09:47.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.040 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.040 issued rwts: total=989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.040 00:09:47.040 Run status group 0 (all jobs): 00:09:47.040 READ: bw=26.6MiB/s (27.8MB/s), 98.1KiB/s-14.5MiB/s (100kB/s-15.2MB/s), io=95.2MiB (99.8MB), run=2747-3585msec 00:09:47.040 00:09:47.040 Disk stats (read/write): 00:09:47.040 nvme0n1: ios=103/0, merge=0/0, ticks=3744/0, in_queue=3744, util=99.78% 00:09:47.040 nvme0n2: ios=13286/0, merge=0/0, ticks=3074/0, in_queue=3074, util=95.58% 00:09:47.040 nvme0n3: 
ios=9462/0, merge=0/0, ticks=2734/0, in_queue=2734, util=96.55% 00:09:47.040 nvme0n4: ios=985/0, merge=0/0, ticks=2590/0, in_queue=2590, util=96.48% 00:09:47.299 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.299 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:47.558 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.558 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:47.817 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.817 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:48.076 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.076 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1112284 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:48.335 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:48.594 nvmf hotplug test: fio failed as expected 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:48.594 
11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.594 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.594 rmmod nvme_tcp 00:09:48.594 rmmod nvme_fabrics 00:09:48.853 rmmod nvme_keyring 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1109020 ']' 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1109020 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1109020 ']' 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1109020 00:09:48.853 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1109020 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1109020' 00:09:48.854 killing process with pid 1109020 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1109020 00:09:48.854 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1109020 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:49.113 11:27:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.113 11:27:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.019 00:09:51.019 real 0m28.622s 00:09:51.019 user 2m20.807s 00:09:51.019 sys 0m8.806s 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.019 ************************************ 00:09:51.019 END TEST nvmf_fio_target 00:09:51.019 ************************************ 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.019 ************************************ 00:09:51.019 START TEST nvmf_bdevio 00:09:51.019 ************************************ 00:09:51.019 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:51.279 * Looking for test storage... 
00:09:51.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.279 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:51.279 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:51.279 11:27:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:51.279 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.280 --rc genhtml_branch_coverage=1 00:09:51.280 --rc genhtml_function_coverage=1 00:09:51.280 --rc genhtml_legend=1 00:09:51.280 --rc geninfo_all_blocks=1 00:09:51.280 --rc geninfo_unexecuted_blocks=1 00:09:51.280 00:09:51.280 ' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.280 --rc genhtml_branch_coverage=1 00:09:51.280 --rc genhtml_function_coverage=1 00:09:51.280 --rc genhtml_legend=1 00:09:51.280 --rc geninfo_all_blocks=1 00:09:51.280 --rc geninfo_unexecuted_blocks=1 00:09:51.280 00:09:51.280 ' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.280 --rc genhtml_branch_coverage=1 00:09:51.280 --rc genhtml_function_coverage=1 00:09:51.280 --rc genhtml_legend=1 00:09:51.280 --rc geninfo_all_blocks=1 00:09:51.280 --rc geninfo_unexecuted_blocks=1 00:09:51.280 00:09:51.280 ' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:51.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.280 --rc genhtml_branch_coverage=1 00:09:51.280 --rc genhtml_function_coverage=1 00:09:51.280 --rc genhtml_legend=1 00:09:51.280 --rc geninfo_all_blocks=1 00:09:51.280 --rc geninfo_unexecuted_blocks=1 00:09:51.280 00:09:51.280 ' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.280 11:27:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:57.851 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.851 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:57.852 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.852 11:27:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:57.852 Found net devices under 0000:af:00.0: cvl_0_0 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:57.852 Found net devices under 0000:af:00.1: cvl_0_1 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.852 
11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:09:57.852 00:09:57.852 --- 10.0.0.2 ping statistics --- 00:09:57.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.852 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:09:57.852 00:09:57.852 --- 10.0.0.1 ping statistics --- 00:09:57.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.852 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1117056 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1117056 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1117056 ']' 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.852 11:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.852 [2024-11-15 11:27:57.869106] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:09:57.852 [2024-11-15 11:27:57.869165] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.852 [2024-11-15 11:27:57.942179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.852 [2024-11-15 11:27:57.983399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.852 [2024-11-15 11:27:57.983429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.852 [2024-11-15 11:27:57.983436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.852 [2024-11-15 11:27:57.983441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.852 [2024-11-15 11:27:57.983446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.852 [2024-11-15 11:27:57.985087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.852 [2024-11-15 11:27:57.985198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.852 [2024-11-15 11:27:57.985312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.852 [2024-11-15 11:27:57.985312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.852 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.853 [2024-11-15 11:27:58.136937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.853 Malloc0 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.853 11:27:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.853 [2024-11-15 11:27:58.205009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.853 { 00:09:57.853 "params": { 00:09:57.853 "name": "Nvme$subsystem", 00:09:57.853 "trtype": "$TEST_TRANSPORT", 00:09:57.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.853 "adrfam": "ipv4", 00:09:57.853 "trsvcid": "$NVMF_PORT", 00:09:57.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.853 "hdgst": ${hdgst:-false}, 00:09:57.853 "ddgst": ${ddgst:-false} 00:09:57.853 }, 00:09:57.853 "method": "bdev_nvme_attach_controller" 00:09:57.853 } 00:09:57.853 EOF 00:09:57.853 )") 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:57.853 11:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.853 "params": { 00:09:57.853 "name": "Nvme1", 00:09:57.853 "trtype": "tcp", 00:09:57.853 "traddr": "10.0.0.2", 00:09:57.853 "adrfam": "ipv4", 00:09:57.853 "trsvcid": "4420", 00:09:57.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.853 "hdgst": false, 00:09:57.853 "ddgst": false 00:09:57.853 }, 00:09:57.853 "method": "bdev_nvme_attach_controller" 00:09:57.853 }' 00:09:57.853 [2024-11-15 11:27:58.262537] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:09:57.853 [2024-11-15 11:27:58.262593] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117215 ] 00:09:57.853 [2024-11-15 11:27:58.355587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.853 [2024-11-15 11:27:58.407391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.853 [2024-11-15 11:27:58.407495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.853 [2024-11-15 11:27:58.407497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.853 I/O targets: 00:09:57.853 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:57.853 00:09:57.853 00:09:57.853 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.853 http://cunit.sourceforge.net/ 00:09:57.853 00:09:57.853 00:09:57.853 Suite: bdevio tests on: Nvme1n1 00:09:58.112 Test: blockdev write read block ...passed 00:09:58.112 Test: blockdev write zeroes read block ...passed 00:09:58.112 Test: blockdev write zeroes read no split ...passed 00:09:58.112 Test: blockdev write zeroes read split ...passed 00:09:58.112 Test: blockdev write zeroes read split partial ...passed 00:09:58.112 Test: blockdev reset ...[2024-11-15 11:27:58.849089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:58.112 [2024-11-15 11:27:58.849164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23267c0 (9): Bad file descriptor 00:09:58.371 [2024-11-15 11:27:58.993645] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:58.371 passed 00:09:58.371 Test: blockdev write read 8 blocks ...passed 00:09:58.371 Test: blockdev write read size > 128k ...passed 00:09:58.371 Test: blockdev write read invalid size ...passed 00:09:58.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.371 Test: blockdev write read max offset ...passed 00:09:58.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.371 Test: blockdev writev readv 8 blocks ...passed 00:09:58.371 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.371 Test: blockdev writev readv block ...passed 00:09:58.631 Test: blockdev writev readv size > 128k ...passed 00:09:58.631 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.631 Test: blockdev comparev and writev ...[2024-11-15 11:27:59.245255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.245283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.245296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.245304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.245577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.245591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.245602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.245609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.245836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.245846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.245856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.245862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.246097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.246107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.246118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.632 [2024-11-15 11:27:59.246124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:58.632 passed 00:09:58.632 Test: blockdev nvme passthru rw ...passed 00:09:58.632 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:27:59.327809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.632 [2024-11-15 11:27:59.327823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.327929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.632 [2024-11-15 11:27:59.327939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.328045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.632 [2024-11-15 11:27:59.328054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:58.632 [2024-11-15 11:27:59.328159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.632 [2024-11-15 11:27:59.328167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:58.632 passed 00:09:58.632 Test: blockdev nvme admin passthru ...passed 00:09:58.632 Test: blockdev copy ...passed 00:09:58.632 00:09:58.632 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.632 suites 1 1 n/a 0 0 00:09:58.632 tests 23 23 23 0 0 00:09:58.632 asserts 152 152 152 0 n/a 00:09:58.632 00:09:58.632 Elapsed time = 1.373 seconds 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.891 rmmod nvme_tcp 00:09:58.891 rmmod nvme_fabrics 00:09:58.891 rmmod nvme_keyring 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1117056 ']' 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1117056 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1117056 ']' 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1117056 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1117056 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1117056' 00:09:58.891 killing process with pid 1117056 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1117056 00:09:58.891 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1117056 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.151 11:27:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.687 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.688 00:10:01.688 real 0m10.047s 00:10:01.688 user 0m11.742s 00:10:01.688 sys 0m4.875s 00:10:01.688 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.688 11:28:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.688 ************************************ 00:10:01.688 END TEST nvmf_bdevio 00:10:01.688 ************************************ 00:10:01.688 11:28:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:01.688 00:10:01.688 real 4m41.827s 00:10:01.688 user 11m34.753s 00:10:01.688 sys 1m36.063s 
00:10:01.688 11:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.688 11:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.688 ************************************ 00:10:01.688 END TEST nvmf_target_core 00:10:01.688 ************************************ 00:10:01.688 11:28:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:01.688 11:28:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:01.688 11:28:01 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.688 11:28:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:01.688 ************************************ 00:10:01.688 START TEST nvmf_target_extra 00:10:01.688 ************************************ 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:01.688 * Looking for test storage... 00:10:01.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.688 --rc genhtml_branch_coverage=1 00:10:01.688 --rc genhtml_function_coverage=1 00:10:01.688 --rc genhtml_legend=1 00:10:01.688 --rc geninfo_all_blocks=1 00:10:01.688 --rc geninfo_unexecuted_blocks=1 00:10:01.688 00:10:01.688 ' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.688 --rc genhtml_branch_coverage=1 00:10:01.688 --rc genhtml_function_coverage=1 00:10:01.688 --rc genhtml_legend=1 00:10:01.688 --rc geninfo_all_blocks=1 00:10:01.688 --rc geninfo_unexecuted_blocks=1 00:10:01.688 00:10:01.688 ' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.688 --rc genhtml_branch_coverage=1 00:10:01.688 --rc genhtml_function_coverage=1 00:10:01.688 --rc genhtml_legend=1 00:10:01.688 --rc geninfo_all_blocks=1 00:10:01.688 --rc geninfo_unexecuted_blocks=1 00:10:01.688 00:10:01.688 ' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:01.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.688 --rc genhtml_branch_coverage=1 00:10:01.688 --rc genhtml_function_coverage=1 00:10:01.688 --rc genhtml_legend=1 00:10:01.688 --rc geninfo_all_blocks=1 00:10:01.688 --rc geninfo_unexecuted_blocks=1 00:10:01.688 00:10:01.688 ' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.688 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:01.689 ************************************ 00:10:01.689 START TEST nvmf_example 00:10:01.689 ************************************ 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:01.689 * Looking for test storage... 
00:10:01.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.689 --rc genhtml_branch_coverage=1 00:10:01.689 --rc genhtml_function_coverage=1 00:10:01.689 --rc genhtml_legend=1 00:10:01.689 --rc geninfo_all_blocks=1 00:10:01.689 --rc geninfo_unexecuted_blocks=1 00:10:01.689 00:10:01.689 ' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.689 --rc genhtml_branch_coverage=1 00:10:01.689 --rc genhtml_function_coverage=1 00:10:01.689 --rc genhtml_legend=1 00:10:01.689 --rc geninfo_all_blocks=1 00:10:01.689 --rc geninfo_unexecuted_blocks=1 00:10:01.689 00:10:01.689 ' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.689 --rc genhtml_branch_coverage=1 00:10:01.689 --rc genhtml_function_coverage=1 00:10:01.689 --rc genhtml_legend=1 00:10:01.689 --rc geninfo_all_blocks=1 00:10:01.689 --rc geninfo_unexecuted_blocks=1 00:10:01.689 00:10:01.689 ' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:01.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.689 --rc genhtml_branch_coverage=1 00:10:01.689 --rc genhtml_function_coverage=1 00:10:01.689 --rc genhtml_legend=1 00:10:01.689 --rc geninfo_all_blocks=1 00:10:01.689 --rc geninfo_unexecuted_blocks=1 00:10:01.689 00:10:01.689 ' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:01.689 11:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.689 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:01.690 11:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.690 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:06.962 11:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:06.962 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:06.962 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:06.962 Found net devices under 0000:af:00.0: cvl_0_0 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:06.962 Found net devices under 0000:af:00.1: cvl_0_1 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.962 11:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.962 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.963 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:07.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:10:07.221 00:10:07.221 --- 10.0.0.2 ping statistics --- 00:10:07.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.221 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:07.221 00:10:07.221 --- 10.0.0.1 ping statistics --- 00:10:07.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.221 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.221 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1121136 00:10:07.221 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1121136 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 1121136 ']' 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:07.222 11:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:07.222 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:08.598 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:18.578 Initializing NVMe Controllers 00:10:18.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:18.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:18.578 Initialization complete. Launching workers. 00:10:18.578 ======================================================== 00:10:18.578 Latency(us) 00:10:18.578 Device Information : IOPS MiB/s Average min max 00:10:18.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17846.74 69.71 3585.18 665.65 15971.36 00:10:18.578 ======================================================== 00:10:18.578 Total : 17846.74 69.71 3585.18 665.65 15971.36 00:10:18.578 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.836 rmmod nvme_tcp 00:10:18.836 rmmod nvme_fabrics 00:10:18.836 rmmod nvme_keyring 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1121136 ']' 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1121136 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 1121136 ']' 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 1121136 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1121136 00:10:18.836 11:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1121136' 00:10:18.836 killing process with pid 1121136 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 1121136 00:10:18.836 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 1121136 00:10:19.095 nvmf threads initialize successfully 00:10:19.095 bdev subsystem init successfully 00:10:19.095 created a nvmf target service 00:10:19.095 create targets's poll groups done 00:10:19.095 all subsystems of target started 00:10:19.095 nvmf target is running 00:10:19.095 all subsystems of target stopped 00:10:19.095 destroy targets's poll groups done 00:10:19.095 destroyed the nvmf target service 00:10:19.095 bdev subsystem finish successfully 00:10:19.095 nvmf threads destroy successfully 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.095 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.998 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.998 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:20.998 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.998 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.256 00:10:21.256 real 0m19.595s 00:10:21.256 user 0m46.879s 00:10:21.256 sys 0m5.716s 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.256 ************************************ 00:10:21.256 END TEST nvmf_example 00:10:21.256 ************************************ 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:21.256 ************************************ 00:10:21.256 START TEST nvmf_filesystem 00:10:21.256 ************************************ 00:10:21.256 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:21.256 * Looking for test storage... 00:10:21.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.256 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.256 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.256 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.519 --rc genhtml_branch_coverage=1 00:10:21.519 --rc genhtml_function_coverage=1 00:10:21.519 --rc genhtml_legend=1 00:10:21.519 --rc geninfo_all_blocks=1 00:10:21.519 --rc geninfo_unexecuted_blocks=1 00:10:21.519 00:10:21.519 ' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.519 --rc genhtml_branch_coverage=1 00:10:21.519 --rc genhtml_function_coverage=1 00:10:21.519 --rc genhtml_legend=1 00:10:21.519 --rc geninfo_all_blocks=1 00:10:21.519 --rc geninfo_unexecuted_blocks=1 00:10:21.519 00:10:21.519 ' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.519 --rc genhtml_branch_coverage=1 00:10:21.519 --rc genhtml_function_coverage=1 00:10:21.519 --rc genhtml_legend=1 00:10:21.519 --rc geninfo_all_blocks=1 00:10:21.519 --rc geninfo_unexecuted_blocks=1 00:10:21.519 00:10:21.519 ' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.519 --rc genhtml_branch_coverage=1 00:10:21.519 --rc genhtml_function_coverage=1 00:10:21.519 --rc genhtml_legend=1 00:10:21.519 --rc geninfo_all_blocks=1 00:10:21.519 --rc geninfo_unexecuted_blocks=1 00:10:21.519 00:10:21.519 ' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:21.519 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:21.519 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:21.520 
11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:21.520 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:21.520 #define SPDK_CONFIG_H 00:10:21.520 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:21.520 #define SPDK_CONFIG_APPS 1 00:10:21.520 #define SPDK_CONFIG_ARCH native 00:10:21.520 #undef SPDK_CONFIG_ASAN 00:10:21.520 #undef SPDK_CONFIG_AVAHI 00:10:21.520 #undef SPDK_CONFIG_CET 00:10:21.520 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:21.520 #define SPDK_CONFIG_COVERAGE 1 00:10:21.520 #define SPDK_CONFIG_CROSS_PREFIX 00:10:21.520 #undef SPDK_CONFIG_CRYPTO 00:10:21.520 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:21.520 #undef SPDK_CONFIG_CUSTOMOCF 00:10:21.520 #undef SPDK_CONFIG_DAOS 00:10:21.520 #define SPDK_CONFIG_DAOS_DIR 00:10:21.520 #define SPDK_CONFIG_DEBUG 1 00:10:21.520 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:21.520 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:21.520 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:21.520 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:21.520 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:21.520 #undef SPDK_CONFIG_DPDK_UADK 00:10:21.520 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:21.520 #define SPDK_CONFIG_EXAMPLES 1 00:10:21.520 #undef SPDK_CONFIG_FC 00:10:21.520 #define SPDK_CONFIG_FC_PATH 00:10:21.520 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:21.520 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:21.520 #define SPDK_CONFIG_FSDEV 1 00:10:21.520 #undef SPDK_CONFIG_FUSE 00:10:21.520 #undef SPDK_CONFIG_FUZZER 00:10:21.520 #define SPDK_CONFIG_FUZZER_LIB 00:10:21.520 #undef SPDK_CONFIG_GOLANG 00:10:21.520 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:21.520 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:21.520 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:21.520 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:21.520 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:21.520 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:21.521 #undef SPDK_CONFIG_HAVE_LZ4 00:10:21.521 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:21.521 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:21.521 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:21.521 #define SPDK_CONFIG_IDXD 1 00:10:21.521 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:21.521 #undef SPDK_CONFIG_IPSEC_MB 00:10:21.521 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:21.521 #define SPDK_CONFIG_ISAL 1 00:10:21.521 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:21.521 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:21.521 #define SPDK_CONFIG_LIBDIR 00:10:21.521 #undef SPDK_CONFIG_LTO 00:10:21.521 #define SPDK_CONFIG_MAX_LCORES 128 00:10:21.521 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:21.521 #define SPDK_CONFIG_NVME_CUSE 1 00:10:21.521 #undef SPDK_CONFIG_OCF 00:10:21.521 #define SPDK_CONFIG_OCF_PATH 00:10:21.521 #define SPDK_CONFIG_OPENSSL_PATH 00:10:21.521 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:21.521 #define SPDK_CONFIG_PGO_DIR 00:10:21.521 #undef SPDK_CONFIG_PGO_USE 00:10:21.521 #define SPDK_CONFIG_PREFIX /usr/local 00:10:21.521 #undef SPDK_CONFIG_RAID5F 00:10:21.521 #undef SPDK_CONFIG_RBD 00:10:21.521 #define SPDK_CONFIG_RDMA 1 00:10:21.521 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:21.521 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:21.521 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:21.521 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:21.521 #define SPDK_CONFIG_SHARED 1 00:10:21.521 #undef SPDK_CONFIG_SMA 00:10:21.521 #define SPDK_CONFIG_TESTS 1 00:10:21.521 #undef SPDK_CONFIG_TSAN 
00:10:21.521 #define SPDK_CONFIG_UBLK 1 00:10:21.521 #define SPDK_CONFIG_UBSAN 1 00:10:21.521 #undef SPDK_CONFIG_UNIT_TESTS 00:10:21.521 #undef SPDK_CONFIG_URING 00:10:21.521 #define SPDK_CONFIG_URING_PATH 00:10:21.521 #undef SPDK_CONFIG_URING_ZNS 00:10:21.521 #undef SPDK_CONFIG_USDT 00:10:21.521 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:21.521 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:21.521 #define SPDK_CONFIG_VFIO_USER 1 00:10:21.521 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:21.521 #define SPDK_CONFIG_VHOST 1 00:10:21.521 #define SPDK_CONFIG_VIRTIO 1 00:10:21.521 #undef SPDK_CONFIG_VTUNE 00:10:21.521 #define SPDK_CONFIG_VTUNE_DIR 00:10:21.521 #define SPDK_CONFIG_WERROR 1 00:10:21.521 #define SPDK_CONFIG_WPDK_DIR 00:10:21.521 #undef SPDK_CONFIG_XNVME 00:10:21.521 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:21.521 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:21.521 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
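[Editor's note] The exports traced above are the autotest feature flags (0/1 values) that decide which suites this job runs; here SPDK_TEST_NVMF, SPDK_TEST_VFIOUSER, SPDK_TEST_NVME_CLI and SPDK_RUN_UBSAN are enabled while most others stay 0. The snippet below is only an illustrative sketch of that gating pattern, not the harness's literal code (the real logic lives in autotest.sh); run_test, SPDK_TEST_NVMF and SPDK_TEST_NVMF_TRANSPORT are the helpers and flags seen in this trace, and the rootdir path is the workspace checkout assumed from this job.

  # Sketch: a suite runs only when its flag is exported as 1 by autotest_common.sh.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path for this job
  if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 ]]; then
      run_test "nvmf_filesystem" "$rootdir/test/nvmf/target/filesystem.sh" \
          --transport="$SPDK_TEST_NVMF_TRANSPORT"
  fi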
00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:21.522 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:21.522 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1123859 ]] 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1123859 00:10:21.523 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
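The xtrace that follows walks through SPDK's set_test_storage helper picking a directory with at least ~2 GiB free for the filesystem tests (it ends up on the overlay root and exports SPDK_TEST_STORAGE). A rough sketch of that df-based selection pattern, simplified from the trace — names follow the log, but this is not the real common/autotest_common.sh code:

```bash
#!/usr/bin/env bash
# Rough illustration of the df-based test-storage selection traced below
# (simplified; not the actual SPDK helper).
testdir=${testdir:-$PWD}                            # the test's own directory, if set
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB + 64 MiB slack = 2214592512
storage_fallback=$(mktemp -udt spdk.XXXXXX)
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
mkdir -p "${storage_candidates[@]}"

# Record available bytes per mount point (df -T reports 1K blocks).
declare -A avails
while read -r source fs size used avail _ mount; do
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

# Take the first candidate whose backing mount has enough free space.
for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done
```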
00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.kkoWP8 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.kkoWP8/tests/target /tmp/spdk.kkoWP8 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:21.524 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=83286999040 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=94489735168 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11202736128 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47233499136 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47244865536 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=18874843136 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=18897948672 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23105536 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47244181504 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47244869632 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=688128 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:21.524 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9448960000 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9448972288 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:21.524 * Looking for test storage... 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=83286999040 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13417328640 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:21.524 11:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:21.524 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:21.525 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.785 --rc genhtml_branch_coverage=1 00:10:21.785 --rc genhtml_function_coverage=1 00:10:21.785 --rc genhtml_legend=1 00:10:21.785 --rc geninfo_all_blocks=1 00:10:21.785 --rc geninfo_unexecuted_blocks=1 00:10:21.785 00:10:21.785 ' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.785 --rc genhtml_branch_coverage=1 00:10:21.785 --rc genhtml_function_coverage=1 00:10:21.785 --rc genhtml_legend=1 00:10:21.785 --rc geninfo_all_blocks=1 00:10:21.785 --rc geninfo_unexecuted_blocks=1 00:10:21.785 00:10:21.785 ' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.785 --rc genhtml_branch_coverage=1 00:10:21.785 --rc genhtml_function_coverage=1 00:10:21.785 --rc genhtml_legend=1 00:10:21.785 --rc geninfo_all_blocks=1 00:10:21.785 --rc geninfo_unexecuted_blocks=1 00:10:21.785 00:10:21.785 ' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.785 --rc genhtml_branch_coverage=1 00:10:21.785 --rc genhtml_function_coverage=1 00:10:21.785 --rc genhtml_legend=1 00:10:21.785 --rc geninfo_all_blocks=1 00:10:21.785 --rc geninfo_unexecuted_blocks=1 00:10:21.785 00:10:21.785 ' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.785 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:21.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:21.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.786 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:27.048 
11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:27.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:27.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:27.048 Found net devices under 0000:af:00.0: cvl_0_0 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:27.048 Found net devices under 
0000:af:00.1: cvl_0_1 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.048 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.049 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.049 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.049 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.049 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.307 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.307 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.307 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.307 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:10:27.307 00:10:27.307 --- 10.0.0.2 ping statistics --- 00:10:27.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.307 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:10:27.307 00:10:27.307 --- 10.0.0.1 ping statistics --- 00:10:27.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.307 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.307 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 ************************************ 00:10:27.308 START TEST nvmf_filesystem_no_in_capsule 00:10:27.308 ************************************ 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
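At this point the target-side port (cvl_0_0) has been moved into its own network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace at 10.0.0.1, TCP port 4420 is opened, and both directions are verified with ping. A condensed sketch of that setup, mirroring the commands in the trace above (interface names and addresses are the ones used in this run; requires root):

```bash
#!/usr/bin/env bash
# Condensed from the nvmf/common.sh trace above: isolate the target port in a
# namespace, address both ends, open the NVMe/TCP port, verify reachability.
set -e
TARGET_IF=cvl_0_0            # port the SPDK target will listen on
INITIATOR_IF=cvl_0_1         # port left in the default namespace for the host
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default listener port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```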
00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1127031 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1127031 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1127031 ']' 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.308 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 [2024-11-15 11:28:28.150392] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:10:27.308 [2024-11-15 11:28:28.150430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.567 [2024-11-15 11:28:28.237171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.567 [2024-11-15 11:28:28.288438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.567 [2024-11-15 11:28:28.288484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.567 [2024-11-15 11:28:28.288495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.567 [2024-11-15 11:28:28.288504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.567 [2024-11-15 11:28:28.288512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:27.567 [2024-11-15 11:28:28.290506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.567 [2024-11-15 11:28:28.290530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.567 [2024-11-15 11:28:28.290625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.567 [2024-11-15 11:28:28.290636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.502 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.502 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:28.502 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.502 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.502 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 [2024-11-15 11:28:29.112905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 11:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 [2024-11-15 11:28:29.289830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:28.503 { 00:10:28.503 "name": "Malloc1", 00:10:28.503 "aliases": [ 00:10:28.503 "1da1d6f9-5c4c-4945-b5fa-4b0c32fab7fa" 00:10:28.503 ], 00:10:28.503 "product_name": "Malloc disk", 00:10:28.503 "block_size": 512, 00:10:28.503 "num_blocks": 1048576, 00:10:28.503 "uuid": "1da1d6f9-5c4c-4945-b5fa-4b0c32fab7fa", 00:10:28.503 "assigned_rate_limits": { 00:10:28.503 "rw_ios_per_sec": 0, 00:10:28.503 "rw_mbytes_per_sec": 0, 00:10:28.503 "r_mbytes_per_sec": 0, 00:10:28.503 "w_mbytes_per_sec": 0 00:10:28.503 }, 00:10:28.503 "claimed": true, 00:10:28.503 "claim_type": "exclusive_write", 00:10:28.503 "zoned": false, 00:10:28.503 "supported_io_types": { 00:10:28.503 "read": 
true, 00:10:28.503 "write": true, 00:10:28.503 "unmap": true, 00:10:28.503 "flush": true, 00:10:28.503 "reset": true, 00:10:28.503 "nvme_admin": false, 00:10:28.503 "nvme_io": false, 00:10:28.503 "nvme_io_md": false, 00:10:28.503 "write_zeroes": true, 00:10:28.503 "zcopy": true, 00:10:28.503 "get_zone_info": false, 00:10:28.503 "zone_management": false, 00:10:28.503 "zone_append": false, 00:10:28.503 "compare": false, 00:10:28.503 "compare_and_write": false, 00:10:28.503 "abort": true, 00:10:28.503 "seek_hole": false, 00:10:28.503 "seek_data": false, 00:10:28.503 "copy": true, 00:10:28.503 "nvme_iov_md": false 00:10:28.503 }, 00:10:28.503 "memory_domains": [ 00:10:28.503 { 00:10:28.503 "dma_device_id": "system", 00:10:28.503 "dma_device_type": 1 00:10:28.503 }, 00:10:28.503 { 00:10:28.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.503 "dma_device_type": 2 00:10:28.503 } 00:10:28.503 ], 00:10:28.503 "driver_specific": {} 00:10:28.503 } 00:10:28.503 ]' 00:10:28.503 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:28.762 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.138 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.138 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:30.138 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.138 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:30.138 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:32.041 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:32.300 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:32.868 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:34.245 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:34.245 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:34.245 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:34.245 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.245 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.245 ************************************ 00:10:34.245 START TEST filesystem_ext4 00:10:34.245 ************************************ 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
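For readers skimming the trace, the setup phase above condenses to a short recipe: the target exports a 512 MiB Malloc bdev through an NVMe/TCP subsystem, the host connects with nvme-cli, waits for the namespace to appear by serial number, and lays down a single GPT partition. A simplified sketch follows; the NQN, address, port, and serial are taken from the log (the bdev and subsystem creation RPCs are traced again for the in-capsule pass further down), while the plain rpc.py calls and the unbounded wait loop stand in for the test suite's rpc_cmd and waitforserial helpers, so treat this as an assumed equivalent rather than the exact script.

  # Target side: export Malloc1 over NVMe/TCP (values as they appear in the trace)
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect, wait until the SPDK namespace shows up, then partition it
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe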
00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:34.246 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:34.246 mke2fs 1.47.0 (5-Feb-2023) 00:10:34.246 Discarding device blocks: 0/522240 done 00:10:34.246 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:34.246 Filesystem UUID: 30996261-78a1-4cd0-b430-2e653c2f4def 00:10:34.246 Superblock backups stored on blocks: 00:10:34.246 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:34.246 00:10:34.246 Allocating group tables: 0/64 done 00:10:34.246 Writing inode tables: 0/64 done 00:10:34.814 Creating journal (8192 blocks): done 00:10:35.751 Writing superblocks and filesystem accounting information: 0/64 done 00:10:35.751 00:10:35.751 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:35.751 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.571 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.571 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:42.571 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.571 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:42.571 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.572 
11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1127031 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.572 00:10:42.572 real 0m7.832s 00:10:42.572 user 0m0.037s 00:10:42.572 sys 0m0.065s 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:42.572 ************************************ 00:10:42.572 END TEST filesystem_ext4 00:10:42.572 ************************************ 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.572 ************************************ 00:10:42.572 START TEST filesystem_btrfs 00:10:42.572 ************************************ 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:42.572 11:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:42.572 btrfs-progs v6.8.1 00:10:42.572 See https://btrfs.readthedocs.io for more information. 00:10:42.572 00:10:42.572 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:42.572 NOTE: several default settings have changed in version 5.15, please make sure 00:10:42.572 this does not affect your deployments: 00:10:42.572 - DUP for metadata (-m dup) 00:10:42.572 - enabled no-holes (-O no-holes) 00:10:42.572 - enabled free-space-tree (-R free-space-tree) 00:10:42.572 00:10:42.572 Label: (null) 00:10:42.572 UUID: f3b313b6-7392-4501-a510-0138a5b93aaa 00:10:42.572 Node size: 16384 00:10:42.572 Sector size: 4096 (CPU page size: 4096) 00:10:42.572 Filesystem size: 510.00MiB 00:10:42.572 Block group profiles: 00:10:42.572 Data: single 8.00MiB 00:10:42.572 Metadata: DUP 32.00MiB 00:10:42.572 System: DUP 8.00MiB 00:10:42.572 SSD detected: yes 00:10:42.572 Zoned device: no 00:10:42.572 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:42.572 Checksum: crc32c 00:10:42.572 Number of devices: 1 00:10:42.572 Devices: 00:10:42.572 ID SIZE PATH 00:10:42.572 1 510.00MiB /dev/nvme0n1p1 00:10:42.572 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:42.572 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1127031 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.572 
11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.572 00:10:42.572 real 0m0.507s 00:10:42.572 user 0m0.026s 00:10:42.572 sys 0m0.116s 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:42.572 ************************************ 00:10:42.572 END TEST filesystem_btrfs 00:10:42.572 ************************************ 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.572 ************************************ 00:10:42.572 START TEST filesystem_xfs 00:10:42.572 ************************************ 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:42.572 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.572 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.572 = sectsz=512 attr=2, projid32bit=1 00:10:42.572 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.572 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.573 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:42.573 = sunit=0 swidth=0 blks 00:10:42.573 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.573 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.573 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.573 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.509 Discarding blocks...Done. 00:10:43.509 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:43.509 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.466 11:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1127031 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.466 00:10:45.466 real 0m2.937s 00:10:45.466 user 0m0.031s 00:10:45.466 sys 0m0.069s 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.466 ************************************ 00:10:45.466 END TEST filesystem_xfs 00:10:45.466 ************************************ 00:10:45.466 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.725 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.725 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.984 11:28:46 
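Each of the three filesystem subtests above (ext4, btrfs, xfs) exercises the exported namespace the same way; after the last one the host disconnects (just traced) and the target is torn down below. A rough rendering of that per-filesystem loop, reconstructed from the xtrace output rather than copied from target/filesystem.sh, looks like this; $nvmfpid and the force-flag handling are as they appear in the trace:

  fstype=$1                                   # ext4, btrfs, or xfs
  dev=/dev/nvme0n1p1
  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi   # mkfs.ext4 uses -F, the others -f
  mkfs."$fstype" "$force" "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa; sync
  umount /mnt/device
  kill -0 "$nvmfpid"                          # the nvmf_tgt process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still visible to the host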
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:45.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:45.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:45.984 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1127031 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1127031 ']' 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1127031 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1127031 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1127031' 00:10:45.985 killing process with pid 1127031 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 1127031 00:10:45.985 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 1127031 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.244 00:10:46.244 real 0m18.915s 00:10:46.244 user 1m14.671s 00:10:46.244 sys 0m1.473s 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.244 ************************************ 00:10:46.244 END TEST nvmf_filesystem_no_in_capsule 00:10:46.244 ************************************ 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.244 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.503 ************************************ 00:10:46.503 START TEST nvmf_filesystem_in_capsule 00:10:46.503 ************************************ 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1130680 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1130680 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1130680 ']' 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
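The second pass starting here repeats the same filesystem matrix with in-capsule data enabled: in_capsule is set to 4096, and the TCP transport created a few lines below is given -c 4096, so small host-to-controller writes can ride inside the NVMe/TCP command capsule instead of being transferred separately after the command (via R2T/H2C data PDUs). As far as the trace shows, that transport flag is the only configuration difference between the two passes; the first pass's transport creation is outside this excerpt and is assumed to have passed -c 0:

  # Pass 1 (nvmf_filesystem_no_in_capsule) - assumed, not shown in this excerpt
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # Pass 2 (nvmf_filesystem_in_capsule) - as traced below
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # allow 4 KiB of in-capsule data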
00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:46.503 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.503 [2024-11-15 11:28:47.166686] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:10:46.503 [2024-11-15 11:28:47.166738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.503 [2024-11-15 11:28:47.265523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.503 [2024-11-15 11:28:47.315142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.503 [2024-11-15 11:28:47.315181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.503 [2024-11-15 11:28:47.315192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.503 [2024-11-15 11:28:47.315201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.503 [2024-11-15 11:28:47.315209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.503 [2024-11-15 11:28:47.317297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.503 [2024-11-15 11:28:47.317403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.503 [2024-11-15 11:28:47.317497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.503 [2024-11-15 11:28:47.317501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.762 [2024-11-15 11:28:47.470828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.762 11:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.762 Malloc1 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.762 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.021 [2024-11-15 11:28:47.649696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:47.021 11:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:47.021 { 00:10:47.021 "name": "Malloc1", 00:10:47.021 "aliases": [ 00:10:47.021 "2430e7c1-00f8-4ce6-8a80-fa4509b27911" 00:10:47.021 ], 00:10:47.021 "product_name": "Malloc disk", 00:10:47.021 "block_size": 512, 00:10:47.021 "num_blocks": 1048576, 00:10:47.021 "uuid": "2430e7c1-00f8-4ce6-8a80-fa4509b27911", 00:10:47.021 "assigned_rate_limits": { 00:10:47.021 "rw_ios_per_sec": 0, 00:10:47.021 "rw_mbytes_per_sec": 0, 00:10:47.021 "r_mbytes_per_sec": 0, 00:10:47.021 "w_mbytes_per_sec": 0 00:10:47.021 }, 00:10:47.021 "claimed": true, 00:10:47.021 "claim_type": "exclusive_write", 00:10:47.021 "zoned": false, 00:10:47.021 "supported_io_types": { 00:10:47.021 "read": true, 00:10:47.021 "write": true, 00:10:47.021 "unmap": true, 00:10:47.021 "flush": true, 00:10:47.021 "reset": true, 00:10:47.021 "nvme_admin": false, 00:10:47.021 "nvme_io": false, 00:10:47.021 "nvme_io_md": false, 00:10:47.021 "write_zeroes": true, 00:10:47.021 "zcopy": true, 00:10:47.021 "get_zone_info": false, 00:10:47.021 "zone_management": false, 00:10:47.021 "zone_append": false, 00:10:47.021 "compare": false, 00:10:47.021 "compare_and_write": false, 00:10:47.021 "abort": true, 00:10:47.021 "seek_hole": false, 00:10:47.021 "seek_data": false, 00:10:47.021 "copy": true, 00:10:47.021 "nvme_iov_md": false 00:10:47.021 }, 00:10:47.021 "memory_domains": [ 00:10:47.021 { 00:10:47.021 "dma_device_id": "system", 00:10:47.021 "dma_device_type": 1 00:10:47.021 }, 00:10:47.021 { 00:10:47.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.021 "dma_device_type": 2 00:10:47.021 } 00:10:47.021 ], 00:10:47.021 "driver_specific": {} 00:10:47.021 } 00:10:47.021 ]' 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:47.021 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.399 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.399 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:48.399 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.399 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:48.399 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.303 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.563 11:28:51 
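The get_bdev_size check traced just above is a JSON query against the target: bdev_get_bdevs reports the block size and block count, and the helper multiplies them to confirm that the size the host sees for nvme0n1 matches the Malloc bdev (512 MiB in both passes). A hypothetical stand-alone version of that check, reusing the jq filters that appear in the trace (the /sys/block read is an assumed equivalent of the sec_size_to_bytes helper):

  info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(echo "$info" | jq '.[] .block_size')              # 512
  nb=$(echo "$info" | jq '.[] .num_blocks')              # 1048576
  malloc_size=$(( bs * nb ))                              # 536870912 bytes = 512 MiB
  nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))   # size file counts 512-byte sectors
  (( nvme_size == malloc_size ))                           # the test requires these to match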
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:51.500 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.439 ************************************ 00:10:52.439 START TEST filesystem_in_capsule_ext4 00:10:52.439 ************************************ 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:52.439 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:52.439 mke2fs 1.47.0 (5-Feb-2023) 00:10:52.439 Discarding device blocks: 0/522240 done 00:10:52.439 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:52.439 Filesystem UUID: c58cddb2-423a-4726-9307-1034a6d1814c 00:10:52.439 Superblock backups stored on blocks: 00:10:52.439 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:52.439 00:10:52.439 Allocating group tables: 0/64 done 00:10:52.439 Writing inode tables: 
0/64 done 00:10:55.726 Creating journal (8192 blocks): done 00:10:57.490 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:10:57.490 00:10:57.490 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:57.490 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.765 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1130680 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.024 00:11:03.024 real 0m10.585s 00:11:03.024 user 0m0.027s 00:11:03.024 sys 0m0.073s 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.024 ************************************ 00:11:03.024 END TEST filesystem_in_capsule_ext4 00:11:03.024 ************************************ 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.024 
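As a quick consistency check on the mkfs output collected in both passes above: every filesystem is created on the same 510 MiB GPT partition carved out of the 512 MiB namespace, so the block counts reported by mkfs.ext4 and mkfs.xfs describe the same number of bytes as the 510.00MiB size printed by mkfs.btrfs:

  echo $(( 522240 * 1024 ))            # ext4: 522240 1 KiB blocks -> 534773760 bytes
  echo $(( 130560 * 4096 ))            # xfs:  130560 4 KiB blocks -> 534773760 bytes
  echo $(( 534773760 / 1024 / 1024 ))  # 510 MiB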
************************************ 00:11:03.024 START TEST filesystem_in_capsule_btrfs 00:11:03.024 ************************************ 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:03.024 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.592 btrfs-progs v6.8.1 00:11:03.592 See https://btrfs.readthedocs.io for more information. 00:11:03.592 00:11:03.592 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:03.592 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.592 this does not affect your deployments: 00:11:03.592 - DUP for metadata (-m dup) 00:11:03.592 - enabled no-holes (-O no-holes) 00:11:03.592 - enabled free-space-tree (-R free-space-tree) 00:11:03.592 00:11:03.592 Label: (null) 00:11:03.592 UUID: 1773df67-4052-41c9-a76e-fb21e2926695 00:11:03.592 Node size: 16384 00:11:03.592 Sector size: 4096 (CPU page size: 4096) 00:11:03.592 Filesystem size: 510.00MiB 00:11:03.592 Block group profiles: 00:11:03.592 Data: single 8.00MiB 00:11:03.592 Metadata: DUP 32.00MiB 00:11:03.592 System: DUP 8.00MiB 00:11:03.592 SSD detected: yes 00:11:03.592 Zoned device: no 00:11:03.592 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.592 Checksum: crc32c 00:11:03.592 Number of devices: 1 00:11:03.592 Devices: 00:11:03.592 ID SIZE PATH 00:11:03.592 1 510.00MiB /dev/nvme0n1p1 00:11:03.592 00:11:03.592 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:03.592 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1130680 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.852 00:11:03.852 real 0m0.792s 00:11:03.852 user 0m0.023s 00:11:03.852 sys 0m0.121s 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.852 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:03.852 ************************************ 00:11:03.852 END TEST filesystem_in_capsule_btrfs 00:11:03.852 ************************************ 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.853 ************************************ 00:11:03.853 START TEST filesystem_in_capsule_xfs 00:11:03.853 ************************************ 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:03.853 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:04.111 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:04.111 = sectsz=512 attr=2, projid32bit=1 00:11:04.111 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:04.111 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:04.111 data = bsize=4096 blocks=130560, imaxpct=25 00:11:04.111 = sunit=0 swidth=0 blks 00:11:04.111 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:04.111 log =internal log bsize=4096 blocks=16384, version=2 00:11:04.111 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:04.111 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:04.679 Discarding blocks...Done. 
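A minimal sketch of the filesystem round trip this test performs on the exported namespace, reconstructed from the commands traced above (device, mount point, and flags are taken from the log; the real target/filesystem.sh helpers may differ in detail):

  dev=/dev/nvme0n1p1        # partition created on the NVMe-oF attached disk
  mnt=/mnt/device
  mkfs.xfs -f "$dev"        # the btrfs/ext4 variants of the test use their own force flags
  mount "$dev" "$mnt"
  touch "$mnt/aaa"          # simple create/delete round trip through the new filesystem
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # the partition must still be listed afterwards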
00:11:04.679 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:04.679 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1130680 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.584 00:11:06.584 real 0m2.642s 00:11:06.584 user 0m0.024s 00:11:06.584 sys 0m0.076s 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.584 ************************************ 00:11:06.584 END TEST filesystem_in_capsule_xfs 00:11:06.584 ************************************ 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.584 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.844 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.844 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1130680 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1130680 ']' 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1130680 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130680 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130680' 00:11:06.845 killing process with pid 1130680 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 1130680 00:11:06.845 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 1130680 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:07.104 00:11:07.104 real 0m20.800s 00:11:07.104 user 1m21.894s 00:11:07.104 sys 0m1.520s 00:11:07.104 11:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.104 ************************************ 00:11:07.104 END TEST nvmf_filesystem_in_capsule 00:11:07.104 ************************************ 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.104 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.104 rmmod nvme_tcp 00:11:07.363 rmmod nvme_fabrics 00:11:07.363 rmmod nvme_keyring 00:11:07.363 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.363 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.263 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.263 00:11:09.263 real 0m48.134s 00:11:09.263 user 2m38.502s 00:11:09.263 sys 0m7.501s 00:11:09.263 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.263 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:09.263 
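The teardown traced above, in rough self-contained form: unload the NVMe/TCP initiator modules, strip only the SPDK-tagged firewall rules, and remove the target-side network namespace (the body of _remove_spdk_ns is not shown in this trace, so the last two lines are assumptions):

  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except the SPDK-tagged ones
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1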
************************************ 00:11:09.263 END TEST nvmf_filesystem 00:11:09.263 ************************************ 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.523 ************************************ 00:11:09.523 START TEST nvmf_target_discovery 00:11:09.523 ************************************ 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.523 * Looking for test storage... 00:11:09.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:09.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.523 --rc genhtml_branch_coverage=1 00:11:09.523 --rc genhtml_function_coverage=1 00:11:09.523 --rc genhtml_legend=1 00:11:09.523 --rc geninfo_all_blocks=1 00:11:09.523 --rc geninfo_unexecuted_blocks=1 00:11:09.523 00:11:09.523 ' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:09.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.523 --rc genhtml_branch_coverage=1 00:11:09.523 --rc genhtml_function_coverage=1 00:11:09.523 --rc genhtml_legend=1 00:11:09.523 --rc geninfo_all_blocks=1 00:11:09.523 --rc geninfo_unexecuted_blocks=1 00:11:09.523 00:11:09.523 ' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:09.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.523 --rc genhtml_branch_coverage=1 00:11:09.523 --rc genhtml_function_coverage=1 00:11:09.523 --rc genhtml_legend=1 00:11:09.523 --rc geninfo_all_blocks=1 00:11:09.523 --rc geninfo_unexecuted_blocks=1 00:11:09.523 00:11:09.523 ' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:09.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.523 --rc genhtml_branch_coverage=1 00:11:09.523 --rc genhtml_function_coverage=1 00:11:09.523 --rc genhtml_legend=1 00:11:09.523 --rc geninfo_all_blocks=1 00:11:09.523 --rc geninfo_unexecuted_blocks=1 00:11:09.523 00:11:09.523 ' 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.523 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.524 11:29:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.793 11:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.793 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:14.794 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:14.794 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:14.794 Found net devices under 0000:af:00.0: cvl_0_0 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:14.794 Found net devices under 0000:af:00.1: cvl_0_1 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.794 11:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.794 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:11:15.054 00:11:15.054 --- 10.0.0.2 ping statistics --- 00:11:15.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.054 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:11:15.054 00:11:15.054 --- 10.0.0.1 ping statistics --- 00:11:15.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.054 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1138046 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1138046 00:11:15.054 11:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 1138046 ']' 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:15.054 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.054 [2024-11-15 11:29:15.828018] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:11:15.054 [2024-11-15 11:29:15.828082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.329 [2024-11-15 11:29:15.931293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.329 [2024-11-15 11:29:15.981197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.329 [2024-11-15 11:29:15.981237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.329 [2024-11-15 11:29:15.981248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.329 [2024-11-15 11:29:15.981257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.329 [2024-11-15 11:29:15.981264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
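Putting the pieces above together: the test network is the two ports of one NIC on 10.0.0.0/24, with the target-side port moved into a dedicated network namespace and nvmf_tgt launched inside it. A rough reconstruction of the traced commands (interface names, addresses, and the binary path follow the log; the rest is illustrative):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                      # target address must answer from the root namespace
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &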
00:11:15.329 [2024-11-15 11:29:15.983301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.329 [2024-11-15 11:29:15.983432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.329 [2024-11-15 11:29:15.983447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.329 [2024-11-15 11:29:15.983452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.329 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.329 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:15.329 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.329 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.329 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 [2024-11-15 11:29:16.123903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 Null1 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 11:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.330 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.330 [2024-11-15 11:29:16.179637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 Null2 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.590 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:15.590 Null3 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 Null4 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.591 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:15.851 00:11:15.851 Discovery Log Number of Records 6, Generation counter 6 00:11:15.851 =====Discovery Log Entry 0====== 00:11:15.851 trtype: tcp 00:11:15.851 adrfam: ipv4 00:11:15.851 subtype: current discovery subsystem 00:11:15.851 treq: not required 00:11:15.851 portid: 0 00:11:15.851 trsvcid: 4420 00:11:15.851 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:15.851 traddr: 10.0.0.2 00:11:15.851 eflags: explicit discovery connections, duplicate discovery information 00:11:15.851 sectype: none 00:11:15.851 =====Discovery Log Entry 1====== 00:11:15.851 trtype: tcp 00:11:15.851 adrfam: ipv4 00:11:15.851 subtype: nvme subsystem 00:11:15.851 treq: not required 00:11:15.851 portid: 0 00:11:15.851 trsvcid: 4420 00:11:15.851 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:15.851 traddr: 10.0.0.2 00:11:15.851 eflags: none 00:11:15.851 sectype: none 00:11:15.851 =====Discovery Log Entry 2====== 00:11:15.851 trtype: tcp 00:11:15.851 adrfam: ipv4 00:11:15.851 subtype: nvme subsystem 00:11:15.851 treq: not required 00:11:15.851 portid: 0 00:11:15.851 trsvcid: 4420 00:11:15.851 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:15.851 traddr: 10.0.0.2 00:11:15.851 eflags: none 00:11:15.851 sectype: none 00:11:15.851 =====Discovery Log Entry 3====== 00:11:15.851 trtype: tcp 00:11:15.851 adrfam: ipv4 00:11:15.851 subtype: nvme subsystem 00:11:15.851 treq: not required 00:11:15.851 portid: 0 00:11:15.851 trsvcid: 4420 00:11:15.851 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:15.851 traddr: 10.0.0.2 00:11:15.851 eflags: none 00:11:15.851 sectype: none 00:11:15.851 =====Discovery Log Entry 4====== 00:11:15.851 trtype: tcp 00:11:15.851 adrfam: ipv4 00:11:15.851 subtype: nvme subsystem 
00:11:15.851 treq: not required 00:11:15.851 portid: 0 00:11:15.851 trsvcid: 4420 00:11:15.851 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:15.851 traddr: 10.0.0.2 00:11:15.851 eflags: none 00:11:15.851 sectype: none 00:11:15.851 =====Discovery Log Entry 5====== 00:11:15.851 trtype: tcp 00:11:15.851 adrfam: ipv4 00:11:15.851 subtype: discovery subsystem referral 00:11:15.851 treq: not required 00:11:15.851 portid: 0 00:11:15.851 trsvcid: 4430 00:11:15.851 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:15.851 traddr: 10.0.0.2 00:11:15.851 eflags: none 00:11:15.851 sectype: none 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:15.851 Perform nvmf subsystem discovery via RPC 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 [ 00:11:15.851 { 00:11:15.851 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:15.851 "subtype": "Discovery", 00:11:15.851 "listen_addresses": [ 00:11:15.851 { 00:11:15.851 "trtype": "TCP", 00:11:15.851 "adrfam": "IPv4", 00:11:15.851 "traddr": "10.0.0.2", 00:11:15.851 "trsvcid": "4420" 00:11:15.851 } 00:11:15.851 ], 00:11:15.851 "allow_any_host": true, 00:11:15.851 "hosts": [] 00:11:15.851 }, 00:11:15.851 { 00:11:15.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.851 "subtype": "NVMe", 00:11:15.851 "listen_addresses": [ 00:11:15.851 { 00:11:15.851 "trtype": "TCP", 00:11:15.851 "adrfam": "IPv4", 00:11:15.851 "traddr": "10.0.0.2", 00:11:15.851 "trsvcid": "4420" 00:11:15.851 } 00:11:15.851 ], 00:11:15.851 "allow_any_host": true, 00:11:15.851 "hosts": [], 00:11:15.851 "serial_number": "SPDK00000000000001", 00:11:15.851 "model_number": "SPDK bdev Controller", 00:11:15.851 "max_namespaces": 32, 00:11:15.851 "min_cntlid": 1, 00:11:15.851 "max_cntlid": 65519, 00:11:15.851 "namespaces": [ 00:11:15.851 { 00:11:15.851 "nsid": 1, 00:11:15.851 "bdev_name": "Null1", 00:11:15.851 "name": "Null1", 00:11:15.851 "nguid": "14FC4D2B407D4CE890A4D6B7AAB55EDE", 00:11:15.851 "uuid": "14fc4d2b-407d-4ce8-90a4-d6b7aab55ede" 00:11:15.851 } 00:11:15.851 ] 00:11:15.851 }, 00:11:15.851 { 00:11:15.851 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:15.851 "subtype": "NVMe", 00:11:15.851 "listen_addresses": [ 00:11:15.851 { 00:11:15.851 "trtype": "TCP", 00:11:15.851 "adrfam": "IPv4", 00:11:15.851 "traddr": "10.0.0.2", 00:11:15.851 "trsvcid": "4420" 00:11:15.851 } 00:11:15.851 ], 00:11:15.851 "allow_any_host": true, 00:11:15.851 "hosts": [], 00:11:15.851 "serial_number": "SPDK00000000000002", 00:11:15.851 "model_number": "SPDK bdev Controller", 00:11:15.851 "max_namespaces": 32, 00:11:15.851 "min_cntlid": 1, 00:11:15.851 "max_cntlid": 65519, 00:11:15.851 "namespaces": [ 00:11:15.851 { 00:11:15.851 "nsid": 1, 00:11:15.851 "bdev_name": "Null2", 00:11:15.851 "name": "Null2", 00:11:15.851 "nguid": "84A50DA392EE4828ADB52C7D52846DC5", 00:11:15.851 "uuid": "84a50da3-92ee-4828-adb5-2c7d52846dc5" 00:11:15.851 } 00:11:15.851 ] 00:11:15.851 }, 00:11:15.851 { 00:11:15.851 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:15.851 "subtype": "NVMe", 00:11:15.851 "listen_addresses": [ 00:11:15.851 { 00:11:15.851 "trtype": "TCP", 00:11:15.851 "adrfam": "IPv4", 00:11:15.851 "traddr": "10.0.0.2", 
00:11:15.851 "trsvcid": "4420" 00:11:15.851 } 00:11:15.851 ], 00:11:15.851 "allow_any_host": true, 00:11:15.851 "hosts": [], 00:11:15.851 "serial_number": "SPDK00000000000003", 00:11:15.851 "model_number": "SPDK bdev Controller", 00:11:15.851 "max_namespaces": 32, 00:11:15.851 "min_cntlid": 1, 00:11:15.851 "max_cntlid": 65519, 00:11:15.851 "namespaces": [ 00:11:15.851 { 00:11:15.851 "nsid": 1, 00:11:15.851 "bdev_name": "Null3", 00:11:15.851 "name": "Null3", 00:11:15.851 "nguid": "65B9C6F1FF0F4C3095F5D75E9184E9F1", 00:11:15.851 "uuid": "65b9c6f1-ff0f-4c30-95f5-d75e9184e9f1" 00:11:15.851 } 00:11:15.851 ] 00:11:15.851 }, 00:11:15.851 { 00:11:15.851 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:15.851 "subtype": "NVMe", 00:11:15.851 "listen_addresses": [ 00:11:15.851 { 00:11:15.851 "trtype": "TCP", 00:11:15.851 "adrfam": "IPv4", 00:11:15.851 "traddr": "10.0.0.2", 00:11:15.851 "trsvcid": "4420" 00:11:15.851 } 00:11:15.851 ], 00:11:15.851 "allow_any_host": true, 00:11:15.851 "hosts": [], 00:11:15.851 "serial_number": "SPDK00000000000004", 00:11:15.851 "model_number": "SPDK bdev Controller", 00:11:15.851 "max_namespaces": 32, 00:11:15.851 "min_cntlid": 1, 00:11:15.851 "max_cntlid": 65519, 00:11:15.851 "namespaces": [ 00:11:15.851 { 00:11:15.851 "nsid": 1, 00:11:15.851 "bdev_name": "Null4", 00:11:15.851 "name": "Null4", 00:11:15.851 "nguid": "5DCA492D6C9B4739AB3D947A5928074C", 00:11:15.851 "uuid": "5dca492d-6c9b-4739-ab3d-947a5928074c" 00:11:15.851 } 00:11:15.851 ] 00:11:15.851 } 00:11:15.851 ] 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.851 11:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.851 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.852 11:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.852 rmmod nvme_tcp 00:11:15.852 rmmod nvme_fabrics 00:11:15.852 rmmod nvme_keyring 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1138046 ']' 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1138046 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 1138046 ']' 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 1138046 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:15.852 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1138046 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1138046' 00:11:16.112 killing process with pid 1138046 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 1138046 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 1138046 00:11:16.112 11:29:16 
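The teardown that runs above is the mirror image of the setup loop; a condensed sketch under the same assumptions (rpc.py path, shell loop):
# Sketch of the teardown exercised above (rpc.py path assumed).
RPC="./scripts/rpc.py"
for i in 1 2 3 4; do
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i     # removes its namespace and listener as well
  $RPC bdev_null_delete Null$i
done
$RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
$RPC bdev_get_bdevs | jq -r '.[].name'                       # empty output = no bdevs left behind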
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.112 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.649 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.649 00:11:18.649 real 0m8.813s 00:11:18.649 user 0m5.211s 00:11:18.649 sys 0m4.403s 00:11:18.649 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.649 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.649 ************************************ 00:11:18.649 END TEST nvmf_target_discovery 00:11:18.649 ************************************ 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.649 ************************************ 00:11:18.649 START TEST nvmf_referrals 00:11:18.649 ************************************ 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:18.649 * Looking for test storage... 
00:11:18.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.649 --rc genhtml_branch_coverage=1 00:11:18.649 --rc genhtml_function_coverage=1 00:11:18.649 --rc genhtml_legend=1 00:11:18.649 --rc geninfo_all_blocks=1 00:11:18.649 --rc geninfo_unexecuted_blocks=1 00:11:18.649 00:11:18.649 ' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.649 --rc genhtml_branch_coverage=1 00:11:18.649 --rc genhtml_function_coverage=1 00:11:18.649 --rc genhtml_legend=1 00:11:18.649 --rc geninfo_all_blocks=1 00:11:18.649 --rc geninfo_unexecuted_blocks=1 00:11:18.649 00:11:18.649 ' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.649 --rc genhtml_branch_coverage=1 00:11:18.649 --rc genhtml_function_coverage=1 00:11:18.649 --rc genhtml_legend=1 00:11:18.649 --rc geninfo_all_blocks=1 00:11:18.649 --rc geninfo_unexecuted_blocks=1 00:11:18.649 00:11:18.649 ' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.649 --rc genhtml_branch_coverage=1 00:11:18.649 --rc genhtml_function_coverage=1 00:11:18.649 --rc genhtml_legend=1 00:11:18.649 --rc geninfo_all_blocks=1 00:11:18.649 --rc geninfo_unexecuted_blocks=1 00:11:18.649 00:11:18.649 ' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.649 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.650 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:23.924 11:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:23.924 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:23.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:23.924 
11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:23.924 Found net devices under 0000:af:00.0: cvl_0_0 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:23.924 Found net devices under 0000:af:00.1: cvl_0_1 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.924 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.925 11:29:24 
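The device-detection loop above resolves each supported PCI function to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that mapping, using the two e810 ports reported in the log (the loop itself is a simplification of common.sh, not a copy of it):
# Sketch: map a PCI network function to its net device name via sysfs.
for pci in 0000:af:00.0 0000:af:00.1; do
  for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
  done
done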
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.925 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:11:24.184 00:11:24.184 --- 10.0.0.2 ping statistics --- 00:11:24.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.184 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:24.184 00:11:24.184 --- 10.0.0.1 ping statistics --- 00:11:24.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.184 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.184 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1141937 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1141937 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 1141937 ']' 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
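nvmf_tcp_init above builds a two-ended topology on a single host: the first port (cvl_0_0) is moved into a network namespace to act as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a firewall exception plus the two pings confirm the path before nvmf_tgt is started inside the namespace. A sketch of those steps, taken from the ip/iptables commands in the log (run as root):
# Sketch of the namespace plumbing behind the ping checks above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (4420) through the host firewall
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator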
00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.185 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.185 [2024-11-15 11:29:24.887910] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:11:24.185 [2024-11-15 11:29:24.887950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.185 [2024-11-15 11:29:24.977483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.185 [2024-11-15 11:29:25.032537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.185 [2024-11-15 11:29:25.032577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.185 [2024-11-15 11:29:25.032588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.185 [2024-11-15 11:29:25.032596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.185 [2024-11-15 11:29:25.032604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.185 [2024-11-15 11:29:25.034684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.185 [2024-11-15 11:29:25.034703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.185 [2024-11-15 11:29:25.034729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.185 [2024-11-15 11:29:25.034734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.122 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 [2024-11-15 11:29:25.850555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:25.123 [2024-11-15 11:29:25.880652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.123 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.382 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.382 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.641 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.641 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:25.641 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.641 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.641 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.641 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:25.642 11:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.642 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.901 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.160 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.419 11:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.419 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.678 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.937 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
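For reference, the referral handling exercised above reduces to a handful of RPCs against the running target plus an nvme discover from the initiator. A minimal manual sketch, assuming the target is already listening for discovery on 10.0.0.2:8009 and that scripts/rpc.py from the SPDK tree talks to the default /var/tmp/spdk.sock (the test additionally passes --hostnqn/--hostid to nvme discover, omitted here):

  # Register referrals on the discovery subsystem (4430 is just the advertised port)
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

  # Target-side view of the referrals ...
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

  # ... and the initiator-side view via the discovery log page
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  # Remove a referral again
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The -n option seen in the trace (nvmf_discovery_add_referral ... -n nqn.2016-06.io.spdk:cnode1) registers the referral under a specific subsystem NQN rather than the default discovery NQN, which is why the later checks compare subnqn values in the discovery entries instead of traddr lists.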
00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.196 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.196 rmmod nvme_tcp 00:11:27.196 rmmod nvme_fabrics 00:11:27.196 rmmod nvme_keyring 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1141937 ']' 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1141937 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 1141937 ']' 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 1141937 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.196 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1141937 00:11:27.455 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.455 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.455 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1141937' 00:11:27.455 killing process with pid 1141937 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 1141937 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 1141937 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.456 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.456 11:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.993 00:11:29.993 real 0m11.313s 00:11:29.993 user 0m15.587s 00:11:29.993 sys 0m5.073s 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.993 ************************************ 00:11:29.993 END TEST nvmf_referrals 00:11:29.993 ************************************ 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.993 ************************************ 00:11:29.993 START TEST nvmf_connect_disconnect 00:11:29.993 ************************************ 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:29.993 * Looking for test storage... 00:11:29.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.993 11:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.993 --rc genhtml_branch_coverage=1 00:11:29.993 --rc genhtml_function_coverage=1 00:11:29.993 --rc genhtml_legend=1 00:11:29.993 --rc geninfo_all_blocks=1 00:11:29.993 --rc geninfo_unexecuted_blocks=1 00:11:29.993 00:11:29.993 ' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.993 --rc genhtml_branch_coverage=1 00:11:29.993 --rc genhtml_function_coverage=1 00:11:29.993 --rc genhtml_legend=1 00:11:29.993 --rc geninfo_all_blocks=1 00:11:29.993 --rc geninfo_unexecuted_blocks=1 00:11:29.993 00:11:29.993 ' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.993 --rc genhtml_branch_coverage=1 00:11:29.993 --rc genhtml_function_coverage=1 00:11:29.993 --rc genhtml_legend=1 00:11:29.993 --rc geninfo_all_blocks=1 00:11:29.993 --rc geninfo_unexecuted_blocks=1 00:11:29.993 00:11:29.993 ' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:29.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.993 --rc genhtml_branch_coverage=1 00:11:29.993 --rc genhtml_function_coverage=1 00:11:29.993 --rc genhtml_legend=1 00:11:29.993 --rc geninfo_all_blocks=1 00:11:29.993 --rc geninfo_unexecuted_blocks=1 00:11:29.993 00:11:29.993 ' 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.993 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.994 11:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.994 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.266 
11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:35.266 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.266 
11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:35.266 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:35.266 Found net devices under 0000:af:00.0: cvl_0_0 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
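The device probing here (gather_supported_nvmf_pci_devs in nvmf/common.sh) matches known Intel/Mellanox PCI IDs and then reads the attached netdev names out of sysfs. A rough by-hand equivalent for the E810 parts (0x8086:0x159b) found on this node; the lspci invocation is illustrative rather than what common.sh itself runs, and the PCI addresses are the ones reported in this log:

  # Intel E810 functions (vendor 0x8086, device 0x159b)
  lspci -D -d 8086:159b

  # The kernel exposes the bound net device name under sysfs
  for pci in 0000:af:00.0 0000:af:00.1; do
      ls /sys/bus/pci/devices/$pci/net/     # cvl_0_0 and cvl_0_1 on this rig
  done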
00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:35.266 Found net devices under 0000:af:00.1: cvl_0_1 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.266 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.266 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.266 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:35.266 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.266 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:11:35.526 00:11:35.526 --- 10.0.0.2 ping statistics --- 00:11:35.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.526 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:35.526 00:11:35.526 --- 10.0.0.1 ping statistics --- 00:11:35.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.526 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1146152 00:11:35.526 11:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1146152 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 1146152 ']' 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.526 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.526 [2024-11-15 11:29:36.293712] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:11:35.526 [2024-11-15 11:29:36.293770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.785 [2024-11-15 11:29:36.393539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.785 [2024-11-15 11:29:36.443149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.785 [2024-11-15 11:29:36.443192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.785 [2024-11-15 11:29:36.443203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.785 [2024-11-15 11:29:36.443212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.785 [2024-11-15 11:29:36.443219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
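Condensing the nvmf_tcp_init / nvmfappstart steps in the trace above: the target-facing port is moved into its own network namespace, both ends get 10.0.0.x addresses, an iptables ACCEPT rule opens the NVMe/TCP port on the initiator side, connectivity is ping-checked, and nvmf_tgt is started inside the namespace. A sketch of the same sequence, run from the SPDK checkout, with the interface names and addresses from this log (the job also tags the iptables rule with an SPDK_NVMF comment so it can be flushed during teardown):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

  # Start the target in the namespace (same flags as above: shm id 0, all trace groups, 4 cores)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &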
00:11:35.785 [2024-11-15 11:29:36.445185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.785 [2024-11-15 11:29:36.445278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.785 [2024-11-15 11:29:36.445374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.785 [2024-11-15 11:29:36.445385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.786 [2024-11-15 11:29:36.589735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.786 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:36.044 11:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:36.044 [2024-11-15 11:29:36.653360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:36.044 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:39.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.382 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.383 rmmod nvme_tcp 00:11:53.383 rmmod nvme_fabrics 00:11:53.383 rmmod nvme_keyring 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1146152 ']' 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1146152 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1146152 ']' 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 1146152 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1146152 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1146152' 00:11:53.383 killing process with pid 1146152 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 1146152 00:11:53.383 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 1146152 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.383 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.950 00:11:55.950 real 0m25.804s 00:11:55.950 user 1m11.953s 00:11:55.950 sys 0m5.650s 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 ************************************ 00:11:55.950 END TEST nvmf_connect_disconnect 00:11:55.950 ************************************ 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:55.950 11:29:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 ************************************ 00:11:55.950 START TEST nvmf_multitarget 00:11:55.950 ************************************ 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:55.950 * Looking for test storage... 00:11:55.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.950 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.951 --rc genhtml_branch_coverage=1 00:11:55.951 --rc genhtml_function_coverage=1 00:11:55.951 --rc genhtml_legend=1 00:11:55.951 --rc geninfo_all_blocks=1 00:11:55.951 --rc geninfo_unexecuted_blocks=1 00:11:55.951 00:11:55.951 ' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.951 --rc genhtml_branch_coverage=1 00:11:55.951 --rc genhtml_function_coverage=1 00:11:55.951 --rc genhtml_legend=1 00:11:55.951 --rc geninfo_all_blocks=1 00:11:55.951 --rc geninfo_unexecuted_blocks=1 00:11:55.951 00:11:55.951 ' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.951 --rc genhtml_branch_coverage=1 00:11:55.951 --rc genhtml_function_coverage=1 00:11:55.951 --rc genhtml_legend=1 00:11:55.951 --rc geninfo_all_blocks=1 00:11:55.951 --rc geninfo_unexecuted_blocks=1 00:11:55.951 00:11:55.951 ' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.951 --rc genhtml_branch_coverage=1 00:11:55.951 --rc genhtml_function_coverage=1 00:11:55.951 --rc genhtml_legend=1 00:11:55.951 --rc geninfo_all_blocks=1 00:11:55.951 --rc geninfo_unexecuted_blocks=1 00:11:55.951 00:11:55.951 ' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.951 11:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.951 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:55.952 11:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.952 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:01.327 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:01.327 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.327 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.586 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:01.587 Found net devices under 0000:af:00.0: cvl_0_0 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:01.587 Found net devices under 0000:af:00.1: cvl_0_1 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:12:01.587 00:12:01.587 --- 10.0.0.2 ping statistics --- 00:12:01.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.587 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:01.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:12:01.587 00:12:01.587 --- 10.0.0.1 ping statistics --- 00:12:01.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.587 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.587 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1153255 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1153255 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 1153255 ']' 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:01.846 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.847 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:01.847 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:01.847 [2024-11-15 11:30:02.531895] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:12:01.847 [2024-11-15 11:30:02.531952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.847 [2024-11-15 11:30:02.632893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.847 [2024-11-15 11:30:02.681935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.847 [2024-11-15 11:30:02.681981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.847 [2024-11-15 11:30:02.681993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.847 [2024-11-15 11:30:02.682003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.847 [2024-11-15 11:30:02.682010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.847 [2024-11-15 11:30:02.683889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.847 [2024-11-15 11:30:02.683911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.847 [2024-11-15 11:30:02.684025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.847 [2024-11-15 11:30:02.684029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:02.106 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:02.365 "nvmf_tgt_1" 00:12:02.365 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:02.624 "nvmf_tgt_2" 00:12:02.624 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:02.624 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:02.624 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:02.624 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:02.882 true 00:12:02.883 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:02.883 true 00:12:02.883 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:02.883 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.142 rmmod nvme_tcp 00:12:03.142 rmmod nvme_fabrics 00:12:03.142 rmmod nvme_keyring 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1153255 ']' 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1153255 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 1153255 ']' 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 1153255 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1153255 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:03.142 11:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1153255' 00:12:03.142 killing process with pid 1153255 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 1153255 00:12:03.142 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 1153255 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.403 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.943 00:12:05.943 real 0m9.869s 00:12:05.943 user 0m8.384s 00:12:05.943 sys 0m4.975s 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 ************************************ 00:12:05.943 END TEST nvmf_multitarget 00:12:05.943 ************************************ 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.943 ************************************ 00:12:05.943 START TEST nvmf_rpc 00:12:05.943 ************************************ 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.943 * Looking for test storage... 
00:12:05.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.943 --rc genhtml_branch_coverage=1 00:12:05.943 --rc genhtml_function_coverage=1 00:12:05.943 --rc genhtml_legend=1 00:12:05.943 --rc geninfo_all_blocks=1 00:12:05.943 --rc geninfo_unexecuted_blocks=1 00:12:05.943 00:12:05.943 ' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.943 --rc genhtml_branch_coverage=1 00:12:05.943 --rc genhtml_function_coverage=1 00:12:05.943 --rc genhtml_legend=1 00:12:05.943 --rc geninfo_all_blocks=1 00:12:05.943 --rc geninfo_unexecuted_blocks=1 00:12:05.943 00:12:05.943 ' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.943 --rc genhtml_branch_coverage=1 00:12:05.943 --rc genhtml_function_coverage=1 00:12:05.943 --rc genhtml_legend=1 00:12:05.943 --rc geninfo_all_blocks=1 00:12:05.943 --rc geninfo_unexecuted_blocks=1 00:12:05.943 00:12:05.943 ' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.943 --rc genhtml_branch_coverage=1 00:12:05.943 --rc genhtml_function_coverage=1 00:12:05.943 --rc genhtml_legend=1 00:12:05.943 --rc geninfo_all_blocks=1 00:12:05.943 --rc geninfo_unexecuted_blocks=1 00:12:05.943 00:12:05.943 ' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.943 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.944 11:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.944 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:11.295 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:11.295 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:11.295 Found net devices under 0000:af:00.0: cvl_0_0 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:11.295 Found net devices under 0000:af:00.1: cvl_0_1 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.295 11:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.295 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.295 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.295 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.295 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.295 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.295 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.295 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.296 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:12:11.554 00:12:11.554 --- 10.0.0.2 ping statistics --- 00:12:11.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.554 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:12:11.554 00:12:11.554 --- 10.0.0.1 ping statistics --- 00:12:11.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.554 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.554 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1157658 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1157658 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 1157658 ']' 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:11.555 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.555 [2024-11-15 11:30:12.369570] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:12:11.555 [2024-11-15 11:30:12.369629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.813 [2024-11-15 11:30:12.470826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.813 [2024-11-15 11:30:12.520588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.813 [2024-11-15 11:30:12.520630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.813 [2024-11-15 11:30:12.520640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.813 [2024-11-15 11:30:12.520649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.813 [2024-11-15 11:30:12.520656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.813 [2024-11-15 11:30:12.522559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.813 [2024-11-15 11:30:12.522667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.813 [2024-11-15 11:30:12.522766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.813 [2024-11-15 11:30:12.522771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.813 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:11.813 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:11.813 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.813 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.813 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.813 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:12.071 "tick_rate": 2200000000, 00:12:12.071 "poll_groups": [ 00:12:12.071 { 00:12:12.071 "name": "nvmf_tgt_poll_group_000", 00:12:12.071 "admin_qpairs": 0, 00:12:12.071 "io_qpairs": 0, 00:12:12.071 "current_admin_qpairs": 0, 00:12:12.071 "current_io_qpairs": 0, 00:12:12.071 "pending_bdev_io": 0, 00:12:12.071 "completed_nvme_io": 0, 00:12:12.071 "transports": [] 00:12:12.071 }, 00:12:12.071 { 00:12:12.071 "name": "nvmf_tgt_poll_group_001", 00:12:12.071 "admin_qpairs": 0, 00:12:12.071 "io_qpairs": 0, 00:12:12.071 "current_admin_qpairs": 0, 00:12:12.071 "current_io_qpairs": 0, 00:12:12.071 "pending_bdev_io": 0, 00:12:12.071 "completed_nvme_io": 0, 00:12:12.071 "transports": [] 00:12:12.071 }, 00:12:12.071 { 00:12:12.071 "name": "nvmf_tgt_poll_group_002", 00:12:12.071 "admin_qpairs": 0, 00:12:12.071 "io_qpairs": 0, 00:12:12.071 
"current_admin_qpairs": 0, 00:12:12.071 "current_io_qpairs": 0, 00:12:12.071 "pending_bdev_io": 0, 00:12:12.071 "completed_nvme_io": 0, 00:12:12.071 "transports": [] 00:12:12.071 }, 00:12:12.071 { 00:12:12.071 "name": "nvmf_tgt_poll_group_003", 00:12:12.071 "admin_qpairs": 0, 00:12:12.071 "io_qpairs": 0, 00:12:12.071 "current_admin_qpairs": 0, 00:12:12.071 "current_io_qpairs": 0, 00:12:12.071 "pending_bdev_io": 0, 00:12:12.071 "completed_nvme_io": 0, 00:12:12.071 "transports": [] 00:12:12.071 } 00:12:12.071 ] 00:12:12.071 }' 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.071 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 [2024-11-15 11:30:12.784487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:12.072 "tick_rate": 2200000000, 00:12:12.072 "poll_groups": [ 00:12:12.072 { 00:12:12.072 "name": "nvmf_tgt_poll_group_000", 00:12:12.072 "admin_qpairs": 0, 00:12:12.072 "io_qpairs": 0, 00:12:12.072 "current_admin_qpairs": 0, 00:12:12.072 "current_io_qpairs": 0, 00:12:12.072 "pending_bdev_io": 0, 00:12:12.072 "completed_nvme_io": 0, 00:12:12.072 "transports": [ 00:12:12.072 { 00:12:12.072 "trtype": "TCP" 00:12:12.072 } 00:12:12.072 ] 00:12:12.072 }, 00:12:12.072 { 00:12:12.072 "name": "nvmf_tgt_poll_group_001", 00:12:12.072 "admin_qpairs": 0, 00:12:12.072 "io_qpairs": 0, 00:12:12.072 "current_admin_qpairs": 0, 00:12:12.072 "current_io_qpairs": 0, 00:12:12.072 "pending_bdev_io": 0, 00:12:12.072 "completed_nvme_io": 0, 00:12:12.072 "transports": [ 00:12:12.072 { 00:12:12.072 "trtype": "TCP" 00:12:12.072 } 00:12:12.072 ] 00:12:12.072 }, 00:12:12.072 { 00:12:12.072 "name": "nvmf_tgt_poll_group_002", 00:12:12.072 "admin_qpairs": 0, 00:12:12.072 "io_qpairs": 0, 00:12:12.072 "current_admin_qpairs": 0, 00:12:12.072 "current_io_qpairs": 0, 00:12:12.072 "pending_bdev_io": 0, 00:12:12.072 "completed_nvme_io": 0, 00:12:12.072 "transports": [ 00:12:12.072 { 00:12:12.072 "trtype": "TCP" 
00:12:12.072 } 00:12:12.072 ] 00:12:12.072 }, 00:12:12.072 { 00:12:12.072 "name": "nvmf_tgt_poll_group_003", 00:12:12.072 "admin_qpairs": 0, 00:12:12.072 "io_qpairs": 0, 00:12:12.072 "current_admin_qpairs": 0, 00:12:12.072 "current_io_qpairs": 0, 00:12:12.072 "pending_bdev_io": 0, 00:12:12.072 "completed_nvme_io": 0, 00:12:12.072 "transports": [ 00:12:12.072 { 00:12:12.072 "trtype": "TCP" 00:12:12.072 } 00:12:12.072 ] 00:12:12.072 } 00:12:12.072 ] 00:12:12.072 }' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.072 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.331 Malloc1 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.331 [2024-11-15 11:30:12.979342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:12.331 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:12.331 [2024-11-15 11:30:13.007852] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:12:12.331 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:12.331 could not add new controller: failed to write to nvme-fabrics device 00:12:12.331 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:12.331 11:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.332 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.710 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.710 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:13.710 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.710 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:13.710 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:15.615 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.874 [2024-11-15 11:30:16.508177] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:12:15.874 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:15.874 could not add new controller: failed to write to nvme-fabrics device 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 
11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.874 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.252 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.252 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:17.252 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.252 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:17.252 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.157 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:19.157 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:19.157 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.157 
11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.157 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.417 [2024-11-15 11:30:20.022155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.417 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.795 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.795 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:20.795 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.795 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:20.795 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:22.702 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.961 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 [2024-11-15 11:30:23.622140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.962 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.338 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.339 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:24.339 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.339 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:24.339 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:26.245 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.245 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.504 [2024-11-15 11:30:27.116958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.504 11:30:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.882 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.882 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:27.882 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.882 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:27.882 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:29.786 
11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.786 [2024-11-15 11:30:30.627567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.786 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.045 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.045 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.045 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.045 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.045 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.045 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.420 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.420 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:31.420 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.420 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:31.420 11:30:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:33.323 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.323 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 [2024-11-15 11:30:34.085975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.700 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.700 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:34.700 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.700 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:34.700 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.621 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:36.880 
11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 [2024-11-15 11:30:37.517316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.880 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 [2024-11-15 11:30:37.565471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 
11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 [2024-11-15 11:30:37.613606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 [2024-11-15 11:30:37.661774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 [2024-11-15 11:30:37.709951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.881 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.141 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:37.141 "tick_rate": 2200000000, 00:12:37.141 "poll_groups": [ 00:12:37.141 { 00:12:37.141 "name": "nvmf_tgt_poll_group_000", 00:12:37.141 "admin_qpairs": 2, 00:12:37.141 "io_qpairs": 196, 00:12:37.141 "current_admin_qpairs": 0, 00:12:37.141 "current_io_qpairs": 0, 00:12:37.141 "pending_bdev_io": 0, 00:12:37.141 "completed_nvme_io": 344, 00:12:37.141 "transports": [ 00:12:37.141 { 00:12:37.141 "trtype": "TCP" 00:12:37.141 } 00:12:37.141 ] 00:12:37.141 }, 00:12:37.141 { 00:12:37.141 "name": "nvmf_tgt_poll_group_001", 00:12:37.141 "admin_qpairs": 2, 00:12:37.141 "io_qpairs": 196, 00:12:37.141 "current_admin_qpairs": 0, 00:12:37.141 "current_io_qpairs": 0, 00:12:37.141 "pending_bdev_io": 0, 00:12:37.141 "completed_nvme_io": 298, 00:12:37.141 "transports": [ 00:12:37.141 { 00:12:37.141 "trtype": "TCP" 00:12:37.141 } 00:12:37.141 ] 00:12:37.141 }, 00:12:37.141 { 00:12:37.141 "name": "nvmf_tgt_poll_group_002", 00:12:37.141 "admin_qpairs": 1, 00:12:37.141 "io_qpairs": 196, 00:12:37.141 "current_admin_qpairs": 0, 00:12:37.142 "current_io_qpairs": 0, 00:12:37.142 "pending_bdev_io": 0, 00:12:37.142 "completed_nvme_io": 246, 00:12:37.142 "transports": [ 00:12:37.142 { 00:12:37.142 "trtype": "TCP" 00:12:37.142 } 00:12:37.142 ] 00:12:37.142 }, 00:12:37.142 { 00:12:37.142 "name": "nvmf_tgt_poll_group_003", 00:12:37.142 "admin_qpairs": 2, 00:12:37.142 "io_qpairs": 196, 00:12:37.142 "current_admin_qpairs": 0, 00:12:37.142 "current_io_qpairs": 0, 00:12:37.142 "pending_bdev_io": 0, 00:12:37.142 "completed_nvme_io": 246, 00:12:37.142 "transports": [ 00:12:37.142 { 00:12:37.142 "trtype": "TCP" 00:12:37.142 } 00:12:37.142 ] 00:12:37.142 } 00:12:37.142 ] 00:12:37.142 }' 00:12:37.142 11:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.142 rmmod nvme_tcp 00:12:37.142 rmmod nvme_fabrics 00:12:37.142 rmmod nvme_keyring 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1157658 ']' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1157658 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 1157658 ']' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 1157658 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.142 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1157658 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
1157658' 00:12:37.401 killing process with pid 1157658 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 1157658 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 1157658 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.939 00:12:39.939 real 0m34.031s 00:12:39.939 user 1m43.972s 00:12:39.939 sys 0m6.580s 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.939 ************************************ 00:12:39.939 END TEST nvmf_rpc 00:12:39.939 ************************************ 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.939 ************************************ 00:12:39.939 START TEST nvmf_invalid 00:12:39.939 ************************************ 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:39.939 * Looking for test storage... 
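For readers reconstructing the nvmf_rpc flow from the trace above: the target/rpc.sh@99-107 lines and the jsum checks at @112-113 reduce to a short create/teardown loop plus a jq/awk summing helper. The sketch below is an illustrative reconstruction from this log only, not the literal rpc.sh source; $loops, $stats and rpc_cmd are assumed to be the variables and helpers the trace already uses.

  # Repeatedly create and tear down a subsystem (order as traced at rpc.sh@99-107).
  for i in $(seq 1 "$loops"); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

  # Sum a numeric field across all poll groups in the captured nvmf_get_stats JSON,
  # as jsum does at rpc.sh@19-20 (assumes $stats holds the JSON shown in the trace above).
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 784 in this run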
00:12:39.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:39.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.939 --rc genhtml_branch_coverage=1 00:12:39.939 --rc genhtml_function_coverage=1 00:12:39.939 --rc genhtml_legend=1 00:12:39.939 --rc geninfo_all_blocks=1 00:12:39.939 --rc geninfo_unexecuted_blocks=1 00:12:39.939 00:12:39.939 ' 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:39.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.939 --rc genhtml_branch_coverage=1 00:12:39.939 --rc genhtml_function_coverage=1 00:12:39.939 --rc genhtml_legend=1 00:12:39.939 --rc geninfo_all_blocks=1 00:12:39.939 --rc geninfo_unexecuted_blocks=1 00:12:39.939 00:12:39.939 ' 00:12:39.939 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:39.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.940 --rc genhtml_branch_coverage=1 00:12:39.940 --rc genhtml_function_coverage=1 00:12:39.940 --rc genhtml_legend=1 00:12:39.940 --rc geninfo_all_blocks=1 00:12:39.940 --rc geninfo_unexecuted_blocks=1 00:12:39.940 00:12:39.940 ' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.940 --rc genhtml_branch_coverage=1 00:12:39.940 --rc genhtml_function_coverage=1 00:12:39.940 --rc genhtml_legend=1 00:12:39.940 --rc geninfo_all_blocks=1 00:12:39.940 --rc geninfo_unexecuted_blocks=1 00:12:39.940 00:12:39.940 ' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:39.940 11:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.940 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:45.209 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:45.209 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:45.209 Found net devices under 0000:af:00.0: cvl_0_0 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:45.209 Found net devices under 0000:af:00.1: cvl_0_1 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:45.209 00:12:45.209 --- 10.0.0.2 ping statistics --- 00:12:45.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.209 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:12:45.209 00:12:45.209 --- 10.0.0.1 ping statistics --- 00:12:45.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.209 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.209 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.210 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1166006 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1166006 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 1166006 ']' 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.210 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.210 [2024-11-15 11:30:46.056806] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:12:45.210 [2024-11-15 11:30:46.056861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.468 [2024-11-15 11:30:46.157712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.468 [2024-11-15 11:30:46.207693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.468 [2024-11-15 11:30:46.207735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.469 [2024-11-15 11:30:46.207745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.469 [2024-11-15 11:30:46.207755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.469 [2024-11-15 11:30:46.207763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.469 [2024-11-15 11:30:46.209667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.469 [2024-11-15 11:30:46.209814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.469 [2024-11-15 11:30:46.209903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.469 [2024-11-15 11:30:46.209907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.469 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:45.469 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:12:45.469 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.469 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.469 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.727 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.727 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:45.727 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17471 00:12:45.986 [2024-11-15 11:30:46.610642] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:45.986 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:45.986 { 00:12:45.986 "nqn": "nqn.2016-06.io.spdk:cnode17471", 00:12:45.986 "tgt_name": "foobar", 00:12:45.986 "method": "nvmf_create_subsystem", 00:12:45.986 "req_id": 1 00:12:45.986 } 00:12:45.986 Got JSON-RPC error response 00:12:45.986 response: 00:12:45.986 { 00:12:45.986 "code": -32603, 00:12:45.986 "message": "Unable to find target foobar" 00:12:45.986 }' 00:12:45.986 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:45.986 { 00:12:45.986 "nqn": "nqn.2016-06.io.spdk:cnode17471", 00:12:45.986 "tgt_name": "foobar", 00:12:45.986 "method": "nvmf_create_subsystem", 00:12:45.986 "req_id": 1 00:12:45.986 } 00:12:45.986 Got JSON-RPC error response 00:12:45.986 
response: 00:12:45.986 { 00:12:45.986 "code": -32603, 00:12:45.986 "message": "Unable to find target foobar" 00:12:45.986 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:45.986 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:45.986 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21234 00:12:46.245 [2024-11-15 11:30:46.887598] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21234: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:46.245 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:46.245 { 00:12:46.245 "nqn": "nqn.2016-06.io.spdk:cnode21234", 00:12:46.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:46.245 "method": "nvmf_create_subsystem", 00:12:46.245 "req_id": 1 00:12:46.245 } 00:12:46.245 Got JSON-RPC error response 00:12:46.245 response: 00:12:46.245 { 00:12:46.245 "code": -32602, 00:12:46.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:46.245 }' 00:12:46.245 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:46.245 { 00:12:46.245 "nqn": "nqn.2016-06.io.spdk:cnode21234", 00:12:46.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:46.245 "method": "nvmf_create_subsystem", 00:12:46.245 "req_id": 1 00:12:46.245 } 00:12:46.245 Got JSON-RPC error response 00:12:46.245 response: 00:12:46.245 { 00:12:46.245 "code": -32602, 00:12:46.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:46.245 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:46.245 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:46.245 11:30:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4710 00:12:46.245 [2024-11-15 11:30:47.076246] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4710: invalid model number 'SPDK_Controller' 00:12:46.245 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:46.245 { 00:12:46.245 "nqn": "nqn.2016-06.io.spdk:cnode4710", 00:12:46.245 "model_number": "SPDK_Controller\u001f", 00:12:46.245 "method": "nvmf_create_subsystem", 00:12:46.245 "req_id": 1 00:12:46.245 } 00:12:46.245 Got JSON-RPC error response 00:12:46.245 response: 00:12:46.245 { 00:12:46.245 "code": -32602, 00:12:46.245 "message": "Invalid MN SPDK_Controller\u001f" 00:12:46.245 }' 00:12:46.245 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:46.245 { 00:12:46.245 "nqn": "nqn.2016-06.io.spdk:cnode4710", 00:12:46.245 "model_number": "SPDK_Controller\u001f", 00:12:46.245 "method": "nvmf_create_subsystem", 00:12:46.245 "req_id": 1 00:12:46.245 } 00:12:46.245 Got JSON-RPC error response 00:12:46.245 response: 00:12:46.245 { 00:12:46.245 "code": -32602, 00:12:46.245 "message": "Invalid MN SPDK_Controller\u001f" 00:12:46.245 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:46.504 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:46.504 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:46.505 11:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
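The xtrace statements surrounding this point come from the gen_random_s helper in target/invalid.sh, which builds a 21-character string one printable character at a time: printf %x turns a decimal character code into hex, echo -e expands the resulting \xNN escape into a literal character, and string+= appends it to the result. The bash below is a minimal sketch of that loop reconstructed from the trace; how the script actually picks each code out of the chars array is not visible in this excerpt, so the RANDOM-based selection (and the no-op handling of a leading '-') is an assumption, not the script's exact code.

gen_random_s() {
    # length of the string to generate, e.g. 21 or 41 as seen in this log
    local length=$1 ll string=
    # printable ASCII codes 32..127, matching the chars array in the trace
    local chars=($(seq 32 127))
    for (( ll = 0; ll < length; ll++ )); do
        # assumed selection method; the trace only shows the codes already chosen
        local code=${chars[RANDOM % ${#chars[@]}]}
        # decimal code -> hex -> literal character, as in the traced statements
        string+=$(echo -e "\\x$(printf '%x' "$code")")
    done
    # the traced helper also tests whether the first character is '-' (so the
    # result cannot be mistaken for an option flag); the branch taken on a match
    # is not shown in this excerpt, so it is left as a bare check here
    [[ ${string:0:1} == \- ]] || :
    echo "$string"
}

Invoked as gen_random_s 21, it prints a random printable string such as the serial number this test passes to nvmf_create_subsystem -s a little further down in the trace.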
00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2e' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.505 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 61 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '`~d!jS&CJB.;OI$[S=zK9' 00:12:46.506 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '`~d!jS&CJB.;OI$[S=zK9' nqn.2016-06.io.spdk:cnode32369 00:12:46.796 [2024-11-15 11:30:47.493737] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32369: invalid serial number '`~d!jS&CJB.;OI$[S=zK9' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:46.796 { 00:12:46.796 "nqn": "nqn.2016-06.io.spdk:cnode32369", 00:12:46.796 "serial_number": "`~d!jS&CJB.;OI$[S=zK9", 00:12:46.796 "method": "nvmf_create_subsystem", 00:12:46.796 "req_id": 1 00:12:46.796 } 00:12:46.796 Got JSON-RPC error response 00:12:46.796 response: 00:12:46.796 { 00:12:46.796 "code": -32602, 00:12:46.796 "message": "Invalid SN `~d!jS&CJB.;OI$[S=zK9" 00:12:46.796 }' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:46.796 { 00:12:46.796 "nqn": "nqn.2016-06.io.spdk:cnode32369", 00:12:46.796 "serial_number": "`~d!jS&CJB.;OI$[S=zK9", 00:12:46.796 "method": "nvmf_create_subsystem", 00:12:46.796 "req_id": 1 00:12:46.796 } 00:12:46.796 Got JSON-RPC error response 00:12:46.796 response: 00:12:46.796 { 00:12:46.796 "code": 
-32602, 00:12:46.796 "message": "Invalid SN `~d!jS&CJB.;OI$[S=zK9" 00:12:46.796 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 103 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:46.796 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.797 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=T 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.056 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x3b' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 108 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '**eg+nt])41RBN)Fu8v]0BrTRIzDE;nm6:sel]P' 00:12:47.057 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '**eg+nt])41RBN)Fu8v]0BrTRIzDE;nm6:sel]P' nqn.2016-06.io.spdk:cnode21455 00:12:47.316 [2024-11-15 11:30:48.035587] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21455: invalid model number '**eg+nt])41RBN)Fu8v]0BrTRIzDE;nm6:sel]P' 00:12:47.316 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:47.316 { 00:12:47.316 "nqn": "nqn.2016-06.io.spdk:cnode21455", 00:12:47.316 "model_number": "**\u007feg+nt])41RBN)Fu8v]0BrTRI\u007fzDE;nm6:sel]P", 00:12:47.316 "method": "nvmf_create_subsystem", 00:12:47.316 "req_id": 1 00:12:47.316 } 00:12:47.316 Got JSON-RPC error response 00:12:47.316 response: 00:12:47.316 { 00:12:47.316 "code": -32602, 00:12:47.316 "message": "Invalid MN **\u007feg+nt])41RBN)Fu8v]0BrTRI\u007fzDE;nm6:sel]P" 00:12:47.316 }' 00:12:47.316 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:47.316 { 00:12:47.316 "nqn": "nqn.2016-06.io.spdk:cnode21455", 00:12:47.316 "model_number": "**\u007feg+nt])41RBN)Fu8v]0BrTRI\u007fzDE;nm6:sel]P", 00:12:47.316 "method": "nvmf_create_subsystem", 00:12:47.316 "req_id": 1 00:12:47.316 } 00:12:47.316 Got JSON-RPC error response 00:12:47.316 response: 00:12:47.316 { 00:12:47.316 "code": -32602, 00:12:47.316 "message": "Invalid MN **\u007feg+nt])41RBN)Fu8v]0BrTRI\u007fzDE;nm6:sel]P" 00:12:47.316 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:47.316 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:47.576 [2024-11-15 
11:30:48.308620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.576 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:47.834 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:47.834 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:47.834 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:47.834 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:47.834 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:48.092 [2024-11-15 11:30:48.888141] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:48.092 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:48.092 { 00:12:48.092 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:48.092 "listen_address": { 00:12:48.092 "trtype": "tcp", 00:12:48.092 "traddr": "", 00:12:48.092 "trsvcid": "4421" 00:12:48.092 }, 00:12:48.092 "method": "nvmf_subsystem_remove_listener", 00:12:48.092 "req_id": 1 00:12:48.092 } 00:12:48.092 Got JSON-RPC error response 00:12:48.092 response: 00:12:48.092 { 00:12:48.092 "code": -32602, 00:12:48.092 "message": "Invalid parameters" 00:12:48.092 }' 00:12:48.092 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:48.093 { 00:12:48.093 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:48.093 "listen_address": { 00:12:48.093 "trtype": "tcp", 00:12:48.093 "traddr": "", 00:12:48.093 "trsvcid": "4421" 00:12:48.093 }, 00:12:48.093 "method": "nvmf_subsystem_remove_listener", 00:12:48.093 "req_id": 1 00:12:48.093 } 00:12:48.093 Got JSON-RPC error response 00:12:48.093 response: 00:12:48.093 { 00:12:48.093 "code": -32602, 00:12:48.093 "message": "Invalid parameters" 00:12:48.093 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:48.093 11:30:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31300 -i 0 00:12:48.351 [2024-11-15 11:30:49.161045] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31300: invalid cntlid range [0-65519] 00:12:48.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:48.351 { 00:12:48.351 "nqn": "nqn.2016-06.io.spdk:cnode31300", 00:12:48.351 "min_cntlid": 0, 00:12:48.351 "method": "nvmf_create_subsystem", 00:12:48.351 "req_id": 1 00:12:48.351 } 00:12:48.351 Got JSON-RPC error response 00:12:48.351 response: 00:12:48.351 { 00:12:48.351 "code": -32602, 00:12:48.351 "message": "Invalid cntlid range [0-65519]" 00:12:48.351 }' 00:12:48.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:48.351 { 00:12:48.351 "nqn": "nqn.2016-06.io.spdk:cnode31300", 00:12:48.351 "min_cntlid": 0, 00:12:48.351 "method": "nvmf_create_subsystem", 00:12:48.351 "req_id": 1 00:12:48.351 } 00:12:48.351 Got JSON-RPC error response 00:12:48.351 response: 00:12:48.351 { 00:12:48.351 "code": -32602, 00:12:48.351 "message": "Invalid cntlid 
range [0-65519]" 00:12:48.351 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.351 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18035 -i 65520 00:12:48.610 [2024-11-15 11:30:49.434080] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18035: invalid cntlid range [65520-65519] 00:12:48.610 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:48.610 { 00:12:48.610 "nqn": "nqn.2016-06.io.spdk:cnode18035", 00:12:48.610 "min_cntlid": 65520, 00:12:48.610 "method": "nvmf_create_subsystem", 00:12:48.610 "req_id": 1 00:12:48.610 } 00:12:48.610 Got JSON-RPC error response 00:12:48.610 response: 00:12:48.610 { 00:12:48.610 "code": -32602, 00:12:48.610 "message": "Invalid cntlid range [65520-65519]" 00:12:48.610 }' 00:12:48.610 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:48.610 { 00:12:48.610 "nqn": "nqn.2016-06.io.spdk:cnode18035", 00:12:48.610 "min_cntlid": 65520, 00:12:48.610 "method": "nvmf_create_subsystem", 00:12:48.610 "req_id": 1 00:12:48.610 } 00:12:48.610 Got JSON-RPC error response 00:12:48.610 response: 00:12:48.610 { 00:12:48.610 "code": -32602, 00:12:48.610 "message": "Invalid cntlid range [65520-65519]" 00:12:48.610 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.610 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20228 -I 0 00:12:48.869 [2024-11-15 11:30:49.702988] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20228: invalid cntlid range [1-0] 00:12:48.869 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:48.869 { 00:12:48.869 "nqn": "nqn.2016-06.io.spdk:cnode20228", 00:12:48.869 "max_cntlid": 0, 00:12:48.869 "method": "nvmf_create_subsystem", 00:12:48.869 "req_id": 1 00:12:48.869 } 00:12:48.869 Got JSON-RPC error response 00:12:48.869 response: 00:12:48.869 { 00:12:48.869 "code": -32602, 00:12:48.869 "message": "Invalid cntlid range [1-0]" 00:12:48.869 }' 00:12:48.869 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:48.869 { 00:12:48.869 "nqn": "nqn.2016-06.io.spdk:cnode20228", 00:12:48.869 "max_cntlid": 0, 00:12:48.869 "method": "nvmf_create_subsystem", 00:12:48.869 "req_id": 1 00:12:48.869 } 00:12:48.869 Got JSON-RPC error response 00:12:48.869 response: 00:12:48.869 { 00:12:48.869 "code": -32602, 00:12:48.869 "message": "Invalid cntlid range [1-0]" 00:12:48.869 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.127 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8028 -I 65520 00:12:49.127 [2024-11-15 11:30:49.971927] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8028: invalid cntlid range [1-65520] 00:12:49.386 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:49.386 { 00:12:49.386 "nqn": "nqn.2016-06.io.spdk:cnode8028", 00:12:49.386 "max_cntlid": 65520, 00:12:49.386 "method": "nvmf_create_subsystem", 00:12:49.386 "req_id": 1 00:12:49.386 } 00:12:49.386 Got JSON-RPC error 
response 00:12:49.386 response: 00:12:49.386 { 00:12:49.386 "code": -32602, 00:12:49.386 "message": "Invalid cntlid range [1-65520]" 00:12:49.386 }' 00:12:49.386 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:49.386 { 00:12:49.386 "nqn": "nqn.2016-06.io.spdk:cnode8028", 00:12:49.386 "max_cntlid": 65520, 00:12:49.386 "method": "nvmf_create_subsystem", 00:12:49.386 "req_id": 1 00:12:49.386 } 00:12:49.386 Got JSON-RPC error response 00:12:49.386 response: 00:12:49.386 { 00:12:49.386 "code": -32602, 00:12:49.386 "message": "Invalid cntlid range [1-65520]" 00:12:49.386 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.386 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode385 -i 6 -I 5 00:12:49.644 [2024-11-15 11:30:50.244937] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode385: invalid cntlid range [6-5] 00:12:49.644 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:49.644 { 00:12:49.644 "nqn": "nqn.2016-06.io.spdk:cnode385", 00:12:49.644 "min_cntlid": 6, 00:12:49.644 "max_cntlid": 5, 00:12:49.644 "method": "nvmf_create_subsystem", 00:12:49.644 "req_id": 1 00:12:49.644 } 00:12:49.644 Got JSON-RPC error response 00:12:49.644 response: 00:12:49.644 { 00:12:49.644 "code": -32602, 00:12:49.644 "message": "Invalid cntlid range [6-5]" 00:12:49.644 }' 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:49.645 { 00:12:49.645 "nqn": "nqn.2016-06.io.spdk:cnode385", 00:12:49.645 "min_cntlid": 6, 00:12:49.645 "max_cntlid": 5, 00:12:49.645 "method": "nvmf_create_subsystem", 00:12:49.645 "req_id": 1 00:12:49.645 } 00:12:49.645 Got JSON-RPC error response 00:12:49.645 response: 00:12:49.645 { 00:12:49.645 "code": -32602, 00:12:49.645 "message": "Invalid cntlid range [6-5]" 00:12:49.645 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:49.645 { 00:12:49.645 "name": "foobar", 00:12:49.645 "method": "nvmf_delete_target", 00:12:49.645 "req_id": 1 00:12:49.645 } 00:12:49.645 Got JSON-RPC error response 00:12:49.645 response: 00:12:49.645 { 00:12:49.645 "code": -32602, 00:12:49.645 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:49.645 }' 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:49.645 { 00:12:49.645 "name": "foobar", 00:12:49.645 "method": "nvmf_delete_target", 00:12:49.645 "req_id": 1 00:12:49.645 } 00:12:49.645 Got JSON-RPC error response 00:12:49.645 response: 00:12:49.645 { 00:12:49.645 "code": -32602, 00:12:49.645 "message": "The specified target doesn't exist, cannot delete it." 
00:12:49.645 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.645 rmmod nvme_tcp 00:12:49.645 rmmod nvme_fabrics 00:12:49.645 rmmod nvme_keyring 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1166006 ']' 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1166006 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 1166006 ']' 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 1166006 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:49.645 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1166006 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1166006' 00:12:49.904 killing process with pid 1166006 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 1166006 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 1166006 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.904 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.439 00:12:52.439 real 0m12.441s 00:12:52.439 user 0m22.990s 00:12:52.439 sys 0m5.075s 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.439 ************************************ 00:12:52.439 END TEST nvmf_invalid 00:12:52.439 ************************************ 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.439 ************************************ 00:12:52.439 START TEST nvmf_connect_stress 00:12:52.439 ************************************ 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:52.439 * Looking for test storage... 
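For reference, the nvmf_invalid run that finishes above probes the controller-ID (cntlid) bounds accepted by nvmf_create_subsystem: 0 and 65520 are rejected for both -i (min_cntlid) and -I (max_cntlid), and a minimum larger than the maximum (6 vs 5) is rejected as well, each time with JSON-RPC error -32602 "Invalid cntlid range [...]". The error strings imply a valid window of 1 through 65519 with min_cntlid <= max_cntlid. Below is a minimal standalone sketch that replays those negative cases; it is not part of invalid.sh, and it assumes a running nvmf target on the default RPC socket, the rpc.py path used by this job, and that rpc.py exits non-zero and prints the error response when a call fails.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # Each combination falls outside the 1..65519 window (or has min > max),
        # so the create call is expected to fail with "Invalid cntlid range".
        if out=$($RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1); then
            echo "unexpected success for '$args'"; exit 1
        fi
        [[ $out == *"Invalid cntlid range"* ]] || { echo "unexpected error for '$args': $out"; exit 1; }
    done
    echo "all out-of-range cntlid values rejected as expected"
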
00:12:52.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.439 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.439 --rc genhtml_branch_coverage=1 00:12:52.439 --rc genhtml_function_coverage=1 00:12:52.439 --rc genhtml_legend=1 00:12:52.439 --rc geninfo_all_blocks=1 00:12:52.439 --rc geninfo_unexecuted_blocks=1 00:12:52.439 00:12:52.439 ' 00:12:52.439 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.439 --rc genhtml_branch_coverage=1 00:12:52.439 --rc genhtml_function_coverage=1 00:12:52.439 --rc genhtml_legend=1 00:12:52.439 --rc geninfo_all_blocks=1 00:12:52.439 --rc geninfo_unexecuted_blocks=1 00:12:52.439 00:12:52.439 ' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.440 --rc genhtml_branch_coverage=1 00:12:52.440 --rc genhtml_function_coverage=1 00:12:52.440 --rc genhtml_legend=1 00:12:52.440 --rc geninfo_all_blocks=1 00:12:52.440 --rc geninfo_unexecuted_blocks=1 00:12:52.440 00:12:52.440 ' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.440 --rc genhtml_branch_coverage=1 00:12:52.440 --rc genhtml_function_coverage=1 00:12:52.440 --rc genhtml_legend=1 00:12:52.440 --rc geninfo_all_blocks=1 00:12:52.440 --rc geninfo_unexecuted_blocks=1 00:12:52.440 00:12:52.440 ' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:52.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.440 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.711 11:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.711 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:57.712 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:57.712 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:57.712 Found net devices under 0000:af:00.0: cvl_0_0 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:57.712 Found net devices under 0000:af:00.1: cvl_0_1 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:12:57.712 00:12:57.712 --- 10.0.0.2 ping statistics --- 00:12:57.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.712 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:12:57.712 00:12:57.712 --- 10.0.0.1 ping statistics --- 00:12:57.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.712 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.712 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.713 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1170462 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1170462 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1170462 ']' 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:57.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.972 [2024-11-15 11:30:58.605596] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:12:57.972 [2024-11-15 11:30:58.605637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.972 [2024-11-15 11:30:58.663262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.972 [2024-11-15 11:30:58.704051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.972 [2024-11-15 11:30:58.704080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.972 [2024-11-15 11:30:58.704087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.972 [2024-11-15 11:30:58.704096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.972 [2024-11-15 11:30:58.704102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.972 [2024-11-15 11:30:58.705499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.972 [2024-11-15 11:30:58.708473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.972 [2024-11-15 11:30:58.708477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.972 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.232 [2024-11-15 11:30:58.851204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
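To make the connect_stress bring-up easier to follow, here is the target-side sequence the script drives through rpc_cmd, collected in one place: the transport and subsystem calls appear just above, and the listener, null bdev, and stress generator follow below. Values and paths are copied from the log; the trailing polling loop is a simplified stand-in of my own for the repeated `kill -0 $PERF_PID` liveness checks visible further down, and the flag descriptions in the comments are my reading of the options rather than something stated in the log.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with the options used by this job (flags copied verbatim from the log)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces (-m)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Listen on the in-namespace target address configured earlier (10.0.0.2:4420)
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Null backing bdev used by the test (1000 MB, 512-byte blocks)
    $RPC bdev_null_create NULL1 1000 512
    # Stress generator, backgrounded and left running for 10 s while the script polls it
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do sleep 1; done
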
00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.232 [2024-11-15 11:30:58.871437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.232 NULL1 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1170600 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.232 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:58.233 11:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.233 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.491 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.491 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:12:58.491 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.491 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.491 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.784 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.784 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:12:58.784 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.784 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.784 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.351 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.351 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:12:59.351 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.351 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.352 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.611 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.611 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:12:59.611 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.611 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.611 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.871 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.871 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:12:59.871 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.871 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.871 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.129 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.129 11:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:00.129 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.129 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.129 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.695 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.695 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:00.695 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.695 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.695 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.955 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.955 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:00.955 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.955 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.955 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.214 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.214 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:01.214 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.214 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.214 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.473 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.473 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:01.473 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.473 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.473 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.731 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.731 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:01.731 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.731 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.731 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.299 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.299 11:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:02.299 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.299 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.299 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.558 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.558 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:02.558 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.558 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.558 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.817 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.817 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:02.817 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.817 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.817 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.076 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.076 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:03.076 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.076 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.076 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.335 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.335 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:03.335 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.335 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.335 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.903 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.903 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:03.903 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.903 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.903 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.161 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.161 11:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:04.162 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.162 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.162 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.421 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.421 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:04.421 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.421 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.421 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.680 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:04.680 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.680 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.680 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.939 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.939 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:04.939 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.939 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.939 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.506 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.506 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:05.506 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.506 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.506 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.765 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.765 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:05.765 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.765 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.765 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.025 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.025 11:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:06.025 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.025 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.025 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.284 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.284 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:06.284 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.284 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.284 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.542 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.542 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:06.542 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.542 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.542 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.108 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.108 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:07.108 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.108 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.108 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.367 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:07.367 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.367 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.625 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.625 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:07.625 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.625 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.625 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.884 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.884 11:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:07.884 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.884 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.884 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.143 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.143 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:08.143 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.143 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.143 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.402 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1170600 00:13:08.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1170600) - No such process 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1170600 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.661 rmmod nvme_tcp 00:13:08.661 rmmod nvme_fabrics 00:13:08.661 rmmod nvme_keyring 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1170462 ']' 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1170462 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1170462 ']' 00:13:08.661 11:31:09 
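The polling phase above repeats the same two steps until the stress tool goes away; reduced to a sketch, the loop looks roughly like this (PID 1170600 and the rpc.txt batch file are taken from the trace, while the loop body is an approximation of connect_stress.sh rather than a copy of it):

STRESS_PID=1170600                          # background I/O stress tool, from the trace
while kill -0 "$STRESS_PID" 2> /dev/null; do
    # Keep exercising the target with a small batch of JSON-RPC calls while I/O runs;
    # rpc_cmd and rpc.txt are the helpers visible in the trace above.
    rpc_cmd < "$testdir/rpc.txt"
done
wait "$STRESS_PID" || true                  # reap it once kill -0 reports it gone
rm -f "$testdir/rpc.txt"
trap - SIGINT SIGTERM EXIT                  # matching the trap reset seen in the trace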
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1170462 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1170462 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1170462' 00:13:08.661 killing process with pid 1170462 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1170462 00:13:08.661 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1170462 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.921 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.458 00:13:11.458 real 0m18.824s 00:13:11.458 user 0m40.534s 00:13:11.458 sys 0m7.833s 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.458 ************************************ 00:13:11.458 END TEST nvmf_connect_stress 00:13:11.458 ************************************ 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:11.458 
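The killprocess step in the teardown above follows a guard-then-kill pattern; roughly (a sketch of the checks visible in the trace, not the canonical autotest_common.sh code):

kill_spdk_app() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0          # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local comm
        comm=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for an SPDK app
        [ "$comm" = sudo ] && return 1               # never kill a wrapping sudo by mistake
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                              # give it a chance to exit cleanly
}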
11:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.458 ************************************ 00:13:11.458 START TEST nvmf_fused_ordering 00:13:11.458 ************************************ 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:11.458 * Looking for test storage... 00:13:11.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:11.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.458 --rc genhtml_branch_coverage=1 00:13:11.458 --rc genhtml_function_coverage=1 00:13:11.458 --rc genhtml_legend=1 00:13:11.458 --rc geninfo_all_blocks=1 00:13:11.458 --rc geninfo_unexecuted_blocks=1 00:13:11.458 00:13:11.458 ' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:11.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.458 --rc genhtml_branch_coverage=1 00:13:11.458 --rc genhtml_function_coverage=1 00:13:11.458 --rc genhtml_legend=1 00:13:11.458 --rc geninfo_all_blocks=1 00:13:11.458 --rc geninfo_unexecuted_blocks=1 00:13:11.458 00:13:11.458 ' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:11.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.458 --rc genhtml_branch_coverage=1 00:13:11.458 --rc genhtml_function_coverage=1 00:13:11.458 --rc genhtml_legend=1 00:13:11.458 --rc geninfo_all_blocks=1 00:13:11.458 --rc geninfo_unexecuted_blocks=1 00:13:11.458 00:13:11.458 ' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:11.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.458 --rc genhtml_branch_coverage=1 00:13:11.458 --rc genhtml_function_coverage=1 00:13:11.458 --rc genhtml_legend=1 00:13:11.458 --rc geninfo_all_blocks=1 00:13:11.458 --rc geninfo_unexecuted_blocks=1 00:13:11.458 00:13:11.458 ' 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
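The lcov gate traced above ('lt 1.15 2' via cmp_versions) is a field-by-field numeric compare on version strings split at '.', '-' and ':'; condensed into one helper (name and structure approximated from the trace, fields assumed numeric as the decimal() check enforces):

version_lt() {      # version_lt 1.15 2  ->  succeeds when $1 < $2
    local IFS='.-:'
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}              # missing fields count as 0
        (( 10#$x < 10#$y )) && return 0
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                                         # equal versions are not "less than"
}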
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.458 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:11.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.459 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:16.727 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:16.728 11:31:16 
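The "[: : integer expression expected" warning earlier in this block is common.sh line 33 feeding an empty string to a numeric test ('[' '' -eq 1 ']'); a defensive version of that kind of guard looks like this (SOME_FLAG is a stand-in, the real variable name is not visible in the log):

# test(1) cannot compare an empty string numerically; give the flag a default first.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    :   # guarded behaviour would go here
fi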
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:16.728 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:16.728 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:16.728 Found net devices under 0000:af:00.0: cvl_0_0 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:16.728 Found net devices under 0000:af:00.1: cvl_0_1 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
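NIC discovery above pairs known Intel/Mellanox device IDs with the PCI bus and then resolves each PCI address to its kernel interface through sysfs; the essential mapping is (a sketch using lspci instead of the script's pre-built PCI cache):

intel=0x8086
# 0x159b is the E810 device ID matched in the trace; -D keeps the PCI domain in the address.
for pci in $(lspci -Dn -d "${intel#0x}:159b" | awk '{print $1}'); do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue
        echo "Found net device under $pci: ${path##*/}"      # e.g. cvl_0_0, cvl_0_1
    done
done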
-- # net_devs+=("${pci_net_devs[@]}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:16.728 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.729 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.729 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:16.729 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:16.729 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.729 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:13:16.729 00:13:16.729 --- 10.0.0.2 ping statistics --- 00:13:16.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.729 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:13:16.729 00:13:16.729 --- 10.0.0.1 ping statistics --- 00:13:16.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.729 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1176055 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1176055 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1176055 ']' 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
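Target and initiator are then split across a network namespace so both ends of the physical link can live on one host (initiator 10.0.0.1 on cvl_0_1 in the root namespace, target 10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk); condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # move the target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                               # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns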
00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.729 [2024-11-15 11:31:17.300921] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:16.729 [2024-11-15 11:31:17.300977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.729 [2024-11-15 11:31:17.372063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.729 [2024-11-15 11:31:17.411008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.729 [2024-11-15 11:31:17.411040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.729 [2024-11-15 11:31:17.411046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.729 [2024-11-15 11:31:17.411052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.729 [2024-11-15 11:31:17.411056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.729 [2024-11-15 11:31:17.411584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.729 [2024-11-15 11:31:17.565486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.729 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
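nvmfappstart then amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers; a minimal sketch of that start/wait pair (rpc_get_methods is only used here as a cheap probe, and paths are shortened):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# waitforlisten: the UNIX-domain RPC socket is visible from the root namespace,
# so poll it until the target is ready to take configuration calls.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; break; }
    sleep 0.5
done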
00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.989 [2024-11-15 11:31:17.585643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.989 NULL1 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.989 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:16.989 [2024-11-15 11:31:17.643128] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
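Stripped of the rpc_cmd wrapper, the provisioning above is a short sequence of rpc.py calls followed by the fused_ordering run itself; paraphrased as plain commands (rpc.py talks to /var/tmp/spdk.sock by default, paths shortened):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                 # ~1 GB null bdev, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'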
00:13:16.989 [2024-11-15 11:31:17.643161] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176081 ] 00:13:17.557 Attached to nqn.2016-06.io.spdk:cnode1 00:13:17.557 Namespace ID: 1 size: 1GB 00:13:17.557 fused_ordering(0) 00:13:17.557 fused_ordering(1) 00:13:17.557 fused_ordering(2) 00:13:17.557 fused_ordering(3) 00:13:17.557 fused_ordering(4) 00:13:17.557 fused_ordering(5) 00:13:17.557 fused_ordering(6) 00:13:17.557 fused_ordering(7) 00:13:17.557 fused_ordering(8) 00:13:17.557 fused_ordering(9) 00:13:17.557 fused_ordering(10) 00:13:17.557 fused_ordering(11) 00:13:17.557 fused_ordering(12) 00:13:17.557 fused_ordering(13) 00:13:17.557 fused_ordering(14) 00:13:17.557 fused_ordering(15) 00:13:17.557 fused_ordering(16) 00:13:17.557 fused_ordering(17) 00:13:17.557 fused_ordering(18) 00:13:17.557 fused_ordering(19) 00:13:17.557 fused_ordering(20) 00:13:17.557 fused_ordering(21) 00:13:17.557 fused_ordering(22) 00:13:17.557 fused_ordering(23) 00:13:17.557 fused_ordering(24) 00:13:17.557 fused_ordering(25) 00:13:17.557 fused_ordering(26) 00:13:17.557 fused_ordering(27) 00:13:17.557 fused_ordering(28) 00:13:17.557 fused_ordering(29) 00:13:17.557 fused_ordering(30) 00:13:17.557 fused_ordering(31) 00:13:17.557 fused_ordering(32) 00:13:17.557 fused_ordering(33) 00:13:17.557 fused_ordering(34) 00:13:17.557 fused_ordering(35) 00:13:17.557 fused_ordering(36) 00:13:17.557 fused_ordering(37) 00:13:17.557 fused_ordering(38) 00:13:17.557 fused_ordering(39) 00:13:17.557 fused_ordering(40) 00:13:17.557 fused_ordering(41) 00:13:17.557 fused_ordering(42) 00:13:17.557 fused_ordering(43) 00:13:17.557 fused_ordering(44) 00:13:17.557 fused_ordering(45) 00:13:17.557 fused_ordering(46) 00:13:17.557 fused_ordering(47) 00:13:17.557 fused_ordering(48) 00:13:17.557 fused_ordering(49) 00:13:17.557 fused_ordering(50) 00:13:17.557 fused_ordering(51) 00:13:17.557 fused_ordering(52) 00:13:17.557 fused_ordering(53) 00:13:17.557 fused_ordering(54) 00:13:17.557 fused_ordering(55) 00:13:17.557 fused_ordering(56) 00:13:17.557 fused_ordering(57) 00:13:17.557 fused_ordering(58) 00:13:17.557 fused_ordering(59) 00:13:17.557 fused_ordering(60) 00:13:17.557 fused_ordering(61) 00:13:17.557 fused_ordering(62) 00:13:17.557 fused_ordering(63) 00:13:17.557 fused_ordering(64) 00:13:17.557 fused_ordering(65) 00:13:17.557 fused_ordering(66) 00:13:17.557 fused_ordering(67) 00:13:17.557 fused_ordering(68) 00:13:17.557 fused_ordering(69) 00:13:17.557 fused_ordering(70) 00:13:17.557 fused_ordering(71) 00:13:17.557 fused_ordering(72) 00:13:17.557 fused_ordering(73) 00:13:17.557 fused_ordering(74) 00:13:17.557 fused_ordering(75) 00:13:17.557 fused_ordering(76) 00:13:17.557 fused_ordering(77) 00:13:17.557 fused_ordering(78) 00:13:17.557 fused_ordering(79) 00:13:17.557 fused_ordering(80) 00:13:17.557 fused_ordering(81) 00:13:17.557 fused_ordering(82) 00:13:17.557 fused_ordering(83) 00:13:17.557 fused_ordering(84) 00:13:17.557 fused_ordering(85) 00:13:17.557 fused_ordering(86) 00:13:17.557 fused_ordering(87) 00:13:17.557 fused_ordering(88) 00:13:17.557 fused_ordering(89) 00:13:17.557 fused_ordering(90) 00:13:17.557 fused_ordering(91) 00:13:17.557 fused_ordering(92) 00:13:17.557 fused_ordering(93) 00:13:17.557 fused_ordering(94) 00:13:17.557 fused_ordering(95) 00:13:17.557 fused_ordering(96) 00:13:17.557 fused_ordering(97) 00:13:17.557 fused_ordering(98) 
[ log condensed: fused_ordering(99) through fused_ordering(958) omitted -- one identical per-iteration progress marker each, timestamped between 00:13:17.557 and 00:13:19.583 ]
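The remaining markers below run up to fused_ordering(1023), after which the trace tears the test down: it clears the EXIT trap, calls nvmftestfini, unloads the nvme-tcp/nvme-fabrics modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), and kills the nvmf_tgt process (pid 1176055 in this run). A minimal sketch of that teardown sequence, with the target PID held in a hypothetical $nvmfpid variable rather than the helpers from autotest_common.sh:

# Illustrative teardown, mirroring what nvmftestfini does in the trace below.
trap - SIGINT SIGTERM EXIT               # drop the error trap installed at test start
sync                                     # settle outstanding I/O before unloading modules
modprobe -v -r nvme-tcp                  # verbose removal also reports nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                          # $nvmfpid: hypothetical variable holding the nvmf_tgt PID
wait "$nvmfpid" 2>/dev/null || true      # wait only applies if nvmf_tgt is a child of this shell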
00:13:19.583 fused_ordering(959) 00:13:19.583 fused_ordering(960) 00:13:19.583 fused_ordering(961) 00:13:19.583 fused_ordering(962) 00:13:19.583 fused_ordering(963) 00:13:19.583 fused_ordering(964) 00:13:19.583 fused_ordering(965) 00:13:19.583 fused_ordering(966) 00:13:19.583 fused_ordering(967) 00:13:19.583 fused_ordering(968) 00:13:19.583 fused_ordering(969) 00:13:19.583 fused_ordering(970) 00:13:19.583 fused_ordering(971) 00:13:19.583 fused_ordering(972) 00:13:19.583 fused_ordering(973) 00:13:19.583 fused_ordering(974) 00:13:19.583 fused_ordering(975) 00:13:19.583 fused_ordering(976) 00:13:19.583 fused_ordering(977) 00:13:19.583 fused_ordering(978) 00:13:19.583 fused_ordering(979) 00:13:19.583 fused_ordering(980) 00:13:19.583 fused_ordering(981) 00:13:19.583 fused_ordering(982) 00:13:19.583 fused_ordering(983) 00:13:19.583 fused_ordering(984) 00:13:19.583 fused_ordering(985) 00:13:19.583 fused_ordering(986) 00:13:19.583 fused_ordering(987) 00:13:19.583 fused_ordering(988) 00:13:19.583 fused_ordering(989) 00:13:19.583 fused_ordering(990) 00:13:19.583 fused_ordering(991) 00:13:19.583 fused_ordering(992) 00:13:19.583 fused_ordering(993) 00:13:19.583 fused_ordering(994) 00:13:19.583 fused_ordering(995) 00:13:19.583 fused_ordering(996) 00:13:19.583 fused_ordering(997) 00:13:19.583 fused_ordering(998) 00:13:19.583 fused_ordering(999) 00:13:19.583 fused_ordering(1000) 00:13:19.583 fused_ordering(1001) 00:13:19.583 fused_ordering(1002) 00:13:19.583 fused_ordering(1003) 00:13:19.583 fused_ordering(1004) 00:13:19.584 fused_ordering(1005) 00:13:19.584 fused_ordering(1006) 00:13:19.584 fused_ordering(1007) 00:13:19.584 fused_ordering(1008) 00:13:19.584 fused_ordering(1009) 00:13:19.584 fused_ordering(1010) 00:13:19.584 fused_ordering(1011) 00:13:19.584 fused_ordering(1012) 00:13:19.584 fused_ordering(1013) 00:13:19.584 fused_ordering(1014) 00:13:19.584 fused_ordering(1015) 00:13:19.584 fused_ordering(1016) 00:13:19.584 fused_ordering(1017) 00:13:19.584 fused_ordering(1018) 00:13:19.584 fused_ordering(1019) 00:13:19.584 fused_ordering(1020) 00:13:19.584 fused_ordering(1021) 00:13:19.584 fused_ordering(1022) 00:13:19.584 fused_ordering(1023) 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.584 rmmod nvme_tcp 00:13:19.584 rmmod nvme_fabrics 00:13:19.584 rmmod nvme_keyring 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:19.584 11:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1176055 ']' 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1176055 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1176055 ']' 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1176055 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1176055 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1176055' 00:13:19.584 killing process with pid 1176055 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1176055 00:13:19.584 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1176055 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.852 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.910 00:13:21.910 real 0m10.781s 00:13:21.910 user 0m6.149s 00:13:21.910 sys 0m5.556s 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.910 ************************************ 00:13:21.910 END TEST nvmf_fused_ordering 00:13:21.910 
************************************ 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.910 ************************************ 00:13:21.910 START TEST nvmf_ns_masking 00:13:21.910 ************************************ 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:21.910 * Looking for test storage... 00:13:21.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:21.910 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.170 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.171 --rc genhtml_branch_coverage=1 00:13:22.171 --rc genhtml_function_coverage=1 00:13:22.171 --rc genhtml_legend=1 00:13:22.171 --rc geninfo_all_blocks=1 00:13:22.171 --rc geninfo_unexecuted_blocks=1 00:13:22.171 00:13:22.171 ' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.171 --rc genhtml_branch_coverage=1 00:13:22.171 --rc genhtml_function_coverage=1 00:13:22.171 --rc genhtml_legend=1 00:13:22.171 --rc geninfo_all_blocks=1 00:13:22.171 --rc geninfo_unexecuted_blocks=1 00:13:22.171 00:13:22.171 ' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.171 --rc genhtml_branch_coverage=1 00:13:22.171 --rc genhtml_function_coverage=1 00:13:22.171 --rc genhtml_legend=1 00:13:22.171 --rc geninfo_all_blocks=1 00:13:22.171 --rc geninfo_unexecuted_blocks=1 00:13:22.171 00:13:22.171 ' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:22.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.171 --rc genhtml_branch_coverage=1 00:13:22.171 --rc genhtml_function_coverage=1 00:13:22.171 --rc genhtml_legend=1 00:13:22.171 --rc geninfo_all_blocks=1 00:13:22.171 --rc geninfo_unexecuted_blocks=1 00:13:22.171 00:13:22.171 ' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3bb1c94a-69c3-40ad-80be-4c4878156a2c 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=605ea996-2194-4f77-bf0c-ebfe085db183 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=94fb728c-a4bf-4a3a-8d4f-d365b509a08c 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.171 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.439 11:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:27.439 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:27.440 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:27.440 11:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:27.440 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:27.440 Found net devices under 0000:af:00.0: cvl_0_0 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
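The trace that follows moves one of the two detected ports (cvl_0_0) into a private network namespace and gives each side an address on 10.0.0.0/24, so the TCP target and the initiator talk over a real link while staying isolated from the host stack. A condensed sketch of that setup, reusing the interface and namespace names the trace prints; treat it as an illustration of the procedure rather than the test script itself:

# Target interface moves into its own namespace; initiator side stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check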
00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:27.440 Found net devices under 0000:af:00.1: cvl_0_1 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.440 11:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:13:27.440 00:13:27.440 --- 10.0.0.2 ping statistics --- 00:13:27.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.440 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:13:27.440 00:13:27.440 --- 10.0.0.1 ping statistics --- 00:13:27.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.440 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.440 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.440 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:27.440 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.440 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.440 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1180081 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1180081 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1180081 ']' 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.441 11:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.441 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:27.441 [2024-11-15 11:31:28.074240] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:27.441 [2024-11-15 11:31:28.074297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.441 [2024-11-15 11:31:28.176842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.441 [2024-11-15 11:31:28.224831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.441 [2024-11-15 11:31:28.224872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.441 [2024-11-15 11:31:28.224882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.441 [2024-11-15 11:31:28.224891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.441 [2024-11-15 11:31:28.224899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
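By this point the harness has built the standard two-interface NVMe/TCP loopback topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420, both directions are ping-verified, nvme-tcp is loaded on the host, and nvmf_tgt is started inside the namespace. Condensed from the trace (interface, namespace, path names and flags are the ones this run logs; ordering is abbreviated):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The SPDK/DPDK EAL notices around this point are that target (nvmfpid 1180081) initializing on core 0 inside the namespace.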
00:13:27.441 [2024-11-15 11:31:28.225627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.699 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:27.959 [2024-11-15 11:31:28.606655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.959 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:27.959 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:27.959 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:27.959 Malloc1 00:13:27.959 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:28.218 Malloc2 00:13:28.218 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.477 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:28.477 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.735 [2024-11-15 11:31:29.455714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.735 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:28.735 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94fb728c-a4bf-4a3a-8d4f-d365b509a08c -a 10.0.0.2 -s 4420 -i 4 00:13:28.993 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.993 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:28.993 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.993 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:28.993 
11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.896 [ 0]:0x1 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9b5d5184d2e4e17b11cfe6788c198f8 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9b5d5184d2e4e17b11cfe6788c198f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.896 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:31.154 [ 0]:0x1 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9b5d5184d2e4e17b11cfe6788c198f8 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9b5d5184d2e4e17b11cfe6788c198f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.154 11:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:31.154 [ 1]:0x2 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d75e3185634905b2c1fcbc391a8547 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:31.154 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.412 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.412 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:31.670 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:31.670 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94fb728c-a4bf-4a3a-8d4f-d365b509a08c -a 10.0.0.2 -s 4420 -i 4 00:13:31.928 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:31.928 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:31.928 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.928 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:13:31.928 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:13:31.928 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:33.829 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:33.830 [ 0]:0x2 00:13:33.830 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.830 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.089 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=28d75e3185634905b2c1fcbc391a8547 00:13:34.089 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.089 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.347 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:34.347 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.347 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.347 [ 0]:0x1 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9b5d5184d2e4e17b11cfe6788c198f8 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9b5d5184d2e4e17b11cfe6788c198f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.347 [ 1]:0x2 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d75e3185634905b2c1fcbc391a8547 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.347 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.604 11:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.604 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.863 [ 0]:0x2 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d75e3185634905b2c1fcbc391a8547 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.863 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94fb728c-a4bf-4a3a-8d4f-d365b509a08c -a 10.0.0.2 -s 4420 -i 4 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:35.121 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:37.652 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.652 [ 0]:0x1 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9b5d5184d2e4e17b11cfe6788c198f8 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9b5d5184d2e4e17b11cfe6788c198f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.652 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.652 [ 1]:0x2 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d75e3185634905b2c1fcbc391a8547 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:37.653 [ 0]:0x2 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:37.653 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d75e3185634905b2c1fcbc391a8547 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:37.912 11:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:37.912 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:38.170 [2024-11-15 11:31:38.808381] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:38.170 request: 00:13:38.170 { 00:13:38.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.170 "nsid": 2, 00:13:38.170 "host": "nqn.2016-06.io.spdk:host1", 00:13:38.170 "method": "nvmf_ns_remove_host", 00:13:38.170 "req_id": 1 00:13:38.170 } 00:13:38.170 Got JSON-RPC error response 00:13:38.170 response: 00:13:38.170 { 00:13:38.170 "code": -32602, 00:13:38.170 "message": "Invalid parameters" 00:13:38.170 } 00:13:38.170 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:38.170 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:38.171 11:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.171 [ 0]:0x2 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28d75e3185634905b2c1fcbc391a8547 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28d75e3185634905b2c1fcbc391a8547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1182323 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1182323 
/var/tmp/host.sock 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1182323 ']' 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:38.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.171 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:38.429 [2024-11-15 11:31:39.027826] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:13:38.430 [2024-11-15 11:31:39.027867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182323 ] 00:13:38.430 [2024-11-15 11:31:39.081129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.430 [2024-11-15 11:31:39.119099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.688 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.688 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:38.688 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.688 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.946 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3bb1c94a-69c3-40ad-80be-4c4878156a2c 00:13:38.946 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:38.946 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3BB1C94A69C340AD80BE4C4878156A2C -i 00:13:39.205 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 605ea996-2194-4f77-bf0c-ebfe085db183 00:13:39.205 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:39.205 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 605EA99621944F77BF0CEBFE085DB183 -i 00:13:39.465 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:39.724 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:39.724 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:39.724 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:40.290 nvme0n1 00:13:40.290 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:40.290 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:40.547 nvme1n2 00:13:40.547 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:40.547 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:40.547 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:40.548 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:40.548 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:40.805 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:40.805 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:40.805 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:40.805 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:41.063 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3bb1c94a-69c3-40ad-80be-4c4878156a2c == \3\b\b\1\c\9\4\a\-\6\9\c\3\-\4\0\a\d\-\8\0\b\e\-\4\c\4\8\7\8\1\5\6\a\2\c ]] 00:13:41.063 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:41.063 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:41.063 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:41.322 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
605ea996-2194-4f77-bf0c-ebfe085db183 == \6\0\5\e\a\9\9\6\-\2\1\9\4\-\4\f\7\7\-\b\f\0\c\-\e\b\f\e\0\8\5\d\b\1\8\3 ]] 00:13:41.322 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.579 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3bb1c94a-69c3-40ad-80be-4c4878156a2c 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3BB1C94A69C340AD80BE4C4878156A2C 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3BB1C94A69C340AD80BE4C4878156A2C 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:42.146 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3BB1C94A69C340AD80BE4C4878156A2C 00:13:42.146 [2024-11-15 11:31:42.984554] bdev.c:8619:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:42.146 [2024-11-15 11:31:42.984592] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:42.146 [2024-11-15 11:31:42.984604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.146 request: 00:13:42.146 { 00:13:42.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.146 "namespace": { 00:13:42.146 "bdev_name": 
"invalid", 00:13:42.146 "nsid": 1, 00:13:42.146 "nguid": "3BB1C94A69C340AD80BE4C4878156A2C", 00:13:42.146 "no_auto_visible": false, 00:13:42.146 "no_metadata": false 00:13:42.146 }, 00:13:42.146 "method": "nvmf_subsystem_add_ns", 00:13:42.146 "req_id": 1 00:13:42.146 } 00:13:42.146 Got JSON-RPC error response 00:13:42.146 response: 00:13:42.146 { 00:13:42.146 "code": -32602, 00:13:42.146 "message": "Invalid parameters" 00:13:42.146 } 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3bb1c94a-69c3-40ad-80be-4c4878156a2c 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:42.405 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3BB1C94A69C340AD80BE4C4878156A2C -i 00:13:42.663 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:44.563 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:44.563 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:44.563 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1182323 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1182323 ']' 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1182323 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1182323 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1182323' 00:13:44.824 killing process with pid 1182323 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1182323 00:13:44.824 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1182323 00:13:45.084 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.653 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:45.653 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:45.653 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.653 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.654 rmmod nvme_tcp 00:13:45.654 rmmod nvme_fabrics 00:13:45.654 rmmod nvme_keyring 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1180081 ']' 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1180081 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1180081 ']' 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1180081 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1180081 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1180081' 00:13:45.654 killing process with pid 1180081 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1180081 00:13:45.654 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1180081 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
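Stripped of the xtrace noise, the masking test body above reduces to a handful of target-side RPCs plus an initiator-side visibility probe. A condensed sketch assembled from the trace (rpc.py abbreviates the full scripts/rpc.py path used in the log; NQNs, the host UUID and the probe pattern are exactly what the target/ns_masking.sh helpers issued here, though the ordering and add/remove cycles are compressed):

    # Target side: subsystem with two Malloc namespaces; visibility is toggled per host.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask NSID 1 for host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again

    # Initiator side: connect as host1, then check whether a given NSID is exposed.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 94fb728c-a4bf-4a3a-8d4f-d365b509a08c -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID => namespace hidden
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The later passes repeat the same check after re-adding the namespaces with fixed NGUIDs (-g 3BB1C94A69C340AD80BE4C4878156A2C and -g 605EA99621944F77BF0CEBFE085DB183) and drive it through bdev_nvme_attach_controller against a second spdk_tgt (pid 1182323, RPC socket /var/tmp/host.sock) rather than the kernel initiator.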
00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.913 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.914 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:47.885 00:13:47.885 real 0m26.035s 00:13:47.885 user 0m33.133s 00:13:47.885 sys 0m6.566s 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:47.885 ************************************ 00:13:47.885 END TEST nvmf_ns_masking 00:13:47.885 ************************************ 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:47.885 ************************************ 00:13:47.885 START TEST nvmf_nvme_cli 00:13:47.885 ************************************ 00:13:47.885 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:48.144 * Looking for test storage... 
00:13:48.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.144 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.144 --rc genhtml_branch_coverage=1 00:13:48.144 --rc genhtml_function_coverage=1 00:13:48.145 --rc genhtml_legend=1 00:13:48.145 --rc geninfo_all_blocks=1 00:13:48.145 --rc geninfo_unexecuted_blocks=1 00:13:48.145 00:13:48.145 ' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.145 --rc genhtml_branch_coverage=1 00:13:48.145 --rc genhtml_function_coverage=1 00:13:48.145 --rc genhtml_legend=1 00:13:48.145 --rc geninfo_all_blocks=1 00:13:48.145 --rc geninfo_unexecuted_blocks=1 00:13:48.145 00:13:48.145 ' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.145 --rc genhtml_branch_coverage=1 00:13:48.145 --rc genhtml_function_coverage=1 00:13:48.145 --rc genhtml_legend=1 00:13:48.145 --rc geninfo_all_blocks=1 00:13:48.145 --rc geninfo_unexecuted_blocks=1 00:13:48.145 00:13:48.145 ' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.145 --rc genhtml_branch_coverage=1 00:13:48.145 --rc genhtml_function_coverage=1 00:13:48.145 --rc genhtml_legend=1 00:13:48.145 --rc geninfo_all_blocks=1 00:13:48.145 --rc geninfo_unexecuted_blocks=1 00:13:48.145 00:13:48.145 ' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.145 11:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.145 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:53.418 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:53.418 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.418 
11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.418 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:53.419 Found net devices under 0000:af:00.0: cvl_0_0 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:53.419 Found net devices under 0000:af:00.1: cvl_0_1 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.419 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:13:53.679 00:13:53.679 --- 10.0.0.2 ping statistics --- 00:13:53.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.679 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:13:53.679 00:13:53.679 --- 10.0.0.1 ping statistics --- 00:13:53.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.679 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1187158 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1187158 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1187158 ']' 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.679 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.679 [2024-11-15 11:31:54.444015] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:13:53.679 [2024-11-15 11:31:54.444074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.939 [2024-11-15 11:31:54.544673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.939 [2024-11-15 11:31:54.595406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.939 [2024-11-15 11:31:54.595452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.939 [2024-11-15 11:31:54.595472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.939 [2024-11-15 11:31:54.595481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.939 [2024-11-15 11:31:54.595489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.939 [2024-11-15 11:31:54.597403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.939 [2024-11-15 11:31:54.597528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.939 [2024-11-15 11:31:54.597570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.939 [2024-11-15 11:31:54.597571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.939 [2024-11-15 11:31:54.738034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.939 Malloc0 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:53.939 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 Malloc1 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 [2024-11-15 11:31:54.845413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.199 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:54.199 00:13:54.199 Discovery Log Number of Records 2, Generation counter 2 00:13:54.199 =====Discovery Log Entry 0====== 00:13:54.199 trtype: tcp 00:13:54.199 adrfam: ipv4 00:13:54.199 subtype: current discovery subsystem 00:13:54.199 treq: not required 00:13:54.199 portid: 0 00:13:54.199 trsvcid: 4420 00:13:54.199 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:54.199 traddr: 10.0.0.2 00:13:54.199 eflags: explicit discovery connections, duplicate discovery information 00:13:54.199 sectype: none 00:13:54.199 =====Discovery Log Entry 1====== 00:13:54.200 trtype: tcp 00:13:54.200 adrfam: ipv4 00:13:54.200 subtype: nvme subsystem 00:13:54.200 treq: not required 00:13:54.200 portid: 0 00:13:54.200 trsvcid: 4420 00:13:54.200 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:54.200 traddr: 10.0.0.2 00:13:54.200 eflags: none 00:13:54.200 sectype: none 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:54.200 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.106 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:56.106 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:13:56.106 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.106 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:56.106 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:56.106 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:58.018 11:31:58 
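The nvme_cli trace above amounts to a small end-to-end bring-up: the target side is configured over JSON-RPC (TCP transport, two malloc bdevs, one subsystem with two namespaces and a listener on 10.0.0.2:4420), and the host side then runs nvme discover and nvme connect against it. A condensed sketch of that sequence is given below. It only repeats commands visible in the trace; rpc.py stands for the full scripts/rpc.py path used above, and options the test adds on top of these (the --hostnqn/--hostid pair, -d/-i on nvmf_create_subsystem) are left out rather than guessed.

    # target side, driven through scripts/rpc.py (shortened to rpc.py here)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side, nvme-cli
    nvme discover -t tcp -a 10.0.0.2 -s 4420                   # lists the discovery subsystem and cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # reaches 2 once both namespaces appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The lsblk count is the same check the test performs through waitforserial: it polls until the number of block devices carrying the SPDKISFASTANDAWESOME serial matches the expected namespace count (2 here) before moving on to the disconnect path.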
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:58.018 /dev/nvme0n2 ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.018 11:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:58.018 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.019 rmmod nvme_tcp 00:13:58.019 rmmod nvme_fabrics 00:13:58.019 rmmod nvme_keyring 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1187158 ']' 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1187158 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1187158 ']' 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1187158 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1187158 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1187158' 00:13:58.019 killing process with pid 1187158 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1187158 00:13:58.019 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1187158 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.278 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:00.814 00:14:00.814 real 0m12.382s 00:14:00.814 user 0m19.026s 00:14:00.814 sys 0m4.764s 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 ************************************ 00:14:00.814 END TEST nvmf_nvme_cli 00:14:00.814 ************************************ 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 ************************************ 00:14:00.814 START TEST nvmf_vfio_user 00:14:00.814 ************************************ 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:00.814 * Looking for test storage... 00:14:00.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:00.814 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:00.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.815 --rc genhtml_branch_coverage=1 00:14:00.815 --rc genhtml_function_coverage=1 00:14:00.815 --rc genhtml_legend=1 00:14:00.815 --rc geninfo_all_blocks=1 00:14:00.815 --rc geninfo_unexecuted_blocks=1 00:14:00.815 00:14:00.815 ' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:00.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.815 --rc genhtml_branch_coverage=1 00:14:00.815 --rc genhtml_function_coverage=1 00:14:00.815 --rc genhtml_legend=1 00:14:00.815 --rc geninfo_all_blocks=1 00:14:00.815 --rc geninfo_unexecuted_blocks=1 00:14:00.815 00:14:00.815 ' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:00.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.815 --rc genhtml_branch_coverage=1 00:14:00.815 --rc genhtml_function_coverage=1 00:14:00.815 --rc genhtml_legend=1 00:14:00.815 --rc geninfo_all_blocks=1 00:14:00.815 --rc geninfo_unexecuted_blocks=1 00:14:00.815 00:14:00.815 ' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:00.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.815 --rc genhtml_branch_coverage=1 00:14:00.815 --rc genhtml_function_coverage=1 00:14:00.815 --rc genhtml_legend=1 00:14:00.815 --rc geninfo_all_blocks=1 00:14:00.815 --rc geninfo_unexecuted_blocks=1 00:14:00.815 00:14:00.815 ' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
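The "[: : integer expression expected" message above is emitted by nvmf/common.sh when build_nvmf_app_args evaluates '[' '' -eq 1 ']' against a flag that is empty in this run. It is harmless here (the test simply evaluates false and the script continues), but the failing pattern and a defensive rewrite are sketched below; SOME_FLAG is a hypothetical placeholder, since the real variable at line 33 of common.sh is not visible in the trace.

```bash
#!/usr/bin/env bash
# Minimal reproduction of the warning seen in the trace above.
# SOME_FLAG is a stand-in; the actual flag name in nvmf/common.sh
# line 33 does not appear in the log.
SOME_FLAG=""

if [ "$SOME_FLAG" -eq 1 ]; then        # prints "[: : integer expression expected"
    echo "flag enabled"                # and evaluates false, so this never runs
fi

# Two equivalent guards that keep the logic but avoid the warning:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # treat empty/unset as 0
    echo "flag enabled"
fi

if [[ -n "$SOME_FLAG" && "$SOME_FLAG" -eq 1 ]]; then  # compare only when non-empty
    echo "flag enabled"
fi
```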
00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1188599 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1188599' 00:14:00.815 Process pid: 1188599 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1188599 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1188599 ']' 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.815 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.816 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.816 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.816 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:00.816 [2024-11-15 11:32:01.425068] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:00.816 [2024-11-15 11:32:01.425128] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.816 [2024-11-15 11:32:01.510714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.816 [2024-11-15 11:32:01.558732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.816 [2024-11-15 11:32:01.558778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:00.816 [2024-11-15 11:32:01.558789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.816 [2024-11-15 11:32:01.558797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.816 [2024-11-15 11:32:01.558805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.816 [2024-11-15 11:32:01.560776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.816 [2024-11-15 11:32:01.560881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.816 [2024-11-15 11:32:01.560998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.816 [2024-11-15 11:32:01.560999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.816 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.816 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:00.816 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:02.193 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:02.193 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:02.193 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:02.193 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:02.193 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:02.193 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:02.452 Malloc1 00:14:02.452 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:02.710 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:02.969 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:03.228 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:03.228 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:03.228 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:03.488 Malloc2 00:14:03.488 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
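For reference, the vfio-user target setup traced above and just below (one VFIOUSER transport, then per device a malloc bdev, a subsystem, a namespace and a vfio-user listener under /var/run/vfio-user) condenses to roughly the RPC sequence sketched here. This assumes the nvmf_tgt launched at the @54 line is already up and serving RPCs on the default /var/tmp/spdk.sock; it mirrors the setup_nvmf_vfio_user loop but is a sketch, not the script itself.

```bash
#!/usr/bin/env bash
# Sketch of the per-device vfio-user setup shown in the trace.
# Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"
NUM_DEVICES=2

# One VFIOUSER transport for the whole target.
"$rpc" nvmf_create_transport -t VFIOUSER

for i in $(seq 1 "$NUM_DEVICES"); do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"

    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"

    # Subsystem with serial SPDK$i, backed by the malloc bdev, reachable
    # through a vfio-user listener rooted at $dir.
    "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$dir" -s 0
done
```

Running "$rpc" nvmf_get_subsystems afterwards should list both cnodes with their VFIOUSER listen addresses, which is what the spdk_nvme_identify run later in the trace attaches to.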
00:14:04.054 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:04.054 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:04.313 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:04.313 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:04.313 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.313 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:04.313 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:04.313 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:04.574 [2024-11-15 11:32:05.177065] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:04.574 [2024-11-15 11:32:05.177109] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189397 ] 00:14:04.574 [2024-11-15 11:32:05.235079] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:04.574 [2024-11-15 11:32:05.243778] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.574 [2024-11-15 11:32:05.243807] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcef7cfe000 00:14:04.574 [2024-11-15 11:32:05.244770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.245770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.246780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.247784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.248787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.249791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.250793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.251799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.574 [2024-11-15 11:32:05.252813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.574 [2024-11-15 11:32:05.252826] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcef7cf3000 00:14:04.574 [2024-11-15 11:32:05.254234] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.574 [2024-11-15 11:32:05.271604] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:04.574 [2024-11-15 11:32:05.271642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:04.574 [2024-11-15 11:32:05.276957] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:04.574 [2024-11-15 11:32:05.277008] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:04.574 [2024-11-15 11:32:05.277098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:04.574 [2024-11-15 11:32:05.277118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:04.574 [2024-11-15 11:32:05.277126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:04.574 [2024-11-15 11:32:05.277949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:04.574 [2024-11-15 11:32:05.277961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:04.574 [2024-11-15 11:32:05.277971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:04.574 [2024-11-15 11:32:05.278955] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:04.574 [2024-11-15 11:32:05.278967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:04.574 [2024-11-15 11:32:05.278977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.574 [2024-11-15 11:32:05.279958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:04.574 [2024-11-15 11:32:05.279970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.574 [2024-11-15 11:32:05.280961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:04.574 [2024-11-15 11:32:05.280971] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:04.574 [2024-11-15 11:32:05.280978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:04.574 [2024-11-15 11:32:05.280987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.574 [2024-11-15 11:32:05.281097] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:04.574 [2024-11-15 11:32:05.281104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.574 [2024-11-15 11:32:05.281110] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:04.574 [2024-11-15 11:32:05.281972] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:04.574 [2024-11-15 11:32:05.282975] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:04.574 [2024-11-15 11:32:05.283980] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:04.574 [2024-11-15 11:32:05.284978] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.574 [2024-11-15 11:32:05.285057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.574 [2024-11-15 11:32:05.285996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:04.574 [2024-11-15 11:32:05.286007] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.574 [2024-11-15 11:32:05.286014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:04.574 [2024-11-15 11:32:05.286039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:04.574 [2024-11-15 11:32:05.286053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.574 [2024-11-15 11:32:05.286074] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.574 [2024-11-15 11:32:05.286081] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.574 [2024-11-15 11:32:05.286088] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.574 [2024-11-15 11:32:05.286104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:04.574 [2024-11-15 11:32:05.286141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:04.574 [2024-11-15 11:32:05.286153] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:04.574 [2024-11-15 11:32:05.286160] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:04.574 [2024-11-15 11:32:05.286166] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:04.574 [2024-11-15 11:32:05.286173] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:04.574 [2024-11-15 11:32:05.286182] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:04.574 [2024-11-15 11:32:05.286188] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:04.574 [2024-11-15 11:32:05.286194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:04.574 [2024-11-15 11:32:05.286206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:04.574 [2024-11-15 11:32:05.286218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:04.574 [2024-11-15 11:32:05.286231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:04.574 [2024-11-15 11:32:05.286244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.574 [2024-11-15 11:32:05.286255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.574 [2024-11-15 11:32:05.286265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.574 [2024-11-15 11:32:05.286276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.574 [2024-11-15 11:32:05.286282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:04.574 [2024-11-15 11:32:05.286290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:04.574 [2024-11-15 11:32:05.286302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:04.574 [2024-11-15 11:32:05.286314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:04.574 [2024-11-15 11:32:05.286323] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:04.574 
[2024-11-15 11:32:05.286330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286478] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:04.575 [2024-11-15 11:32:05.286484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:04.575 [2024-11-15 11:32:05.286489] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.575 [2024-11-15 11:32:05.286497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286522] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:04.575 [2024-11-15 11:32:05.286533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286551] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.575 [2024-11-15 11:32:05.286557] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.575 [2024-11-15 11:32:05.286561] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.575 [2024-11-15 11:32:05.286569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286622] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.575 [2024-11-15 11:32:05.286627] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.575 [2024-11-15 11:32:05.286632] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.575 [2024-11-15 11:32:05.286640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286708] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:04.575 [2024-11-15 11:32:05.286714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:04.575 [2024-11-15 11:32:05.286721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:04.575 [2024-11-15 11:32:05.286741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:04.575 [2024-11-15 11:32:05.286848] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:04.575 [2024-11-15 11:32:05.286852] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:04.575 [2024-11-15 11:32:05.286857] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:04.575 [2024-11-15 11:32:05.286861] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:04.575 [2024-11-15 11:32:05.286869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:04.575 [2024-11-15 11:32:05.286879] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:04.575 [2024-11-15 11:32:05.286884] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:04.575 [2024-11-15 11:32:05.286889] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.575 [2024-11-15 11:32:05.286896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286905] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:04.575 [2024-11-15 11:32:05.286911] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.575 [2024-11-15 11:32:05.286917] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.575 [2024-11-15 11:32:05.286925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286935] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:04.575 [2024-11-15 11:32:05.286940] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:04.575 [2024-11-15 11:32:05.286945] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.575 [2024-11-15 11:32:05.286952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:04.575 [2024-11-15 11:32:05.286962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.286991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:04.575 [2024-11-15 11:32:05.287000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:04.575 ===================================================== 00:14:04.575 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:04.575 ===================================================== 00:14:04.575 Controller Capabilities/Features 00:14:04.575 ================================ 00:14:04.575 Vendor ID: 4e58 00:14:04.575 Subsystem Vendor ID: 4e58 00:14:04.575 Serial Number: SPDK1 00:14:04.575 Model Number: SPDK bdev Controller 00:14:04.575 Firmware Version: 25.01 00:14:04.575 Recommended Arb Burst: 6 00:14:04.575 IEEE OUI Identifier: 8d 6b 50 00:14:04.575 Multi-path I/O 00:14:04.575 May have multiple subsystem ports: Yes 00:14:04.575 May have multiple controllers: Yes 00:14:04.575 Associated with SR-IOV VF: No 00:14:04.575 Max Data Transfer Size: 131072 00:14:04.575 Max Number of Namespaces: 32 00:14:04.575 Max Number of I/O Queues: 127 00:14:04.575 NVMe Specification Version (VS): 1.3 00:14:04.575 NVMe Specification Version (Identify): 1.3 00:14:04.575 Maximum Queue Entries: 256 00:14:04.575 Contiguous Queues Required: Yes 00:14:04.575 Arbitration Mechanisms Supported 00:14:04.575 Weighted Round Robin: Not Supported 00:14:04.575 Vendor Specific: Not Supported 00:14:04.575 Reset Timeout: 15000 ms 00:14:04.575 Doorbell Stride: 4 bytes 00:14:04.575 NVM Subsystem Reset: Not Supported 00:14:04.575 Command Sets Supported 00:14:04.575 NVM Command Set: Supported 00:14:04.575 Boot Partition: Not Supported 00:14:04.575 Memory Page Size Minimum: 4096 bytes 00:14:04.575 Memory Page Size Maximum: 4096 bytes 00:14:04.575 Persistent Memory Region: Not Supported 00:14:04.575 Optional Asynchronous Events Supported 00:14:04.575 Namespace Attribute Notices: Supported 00:14:04.575 Firmware Activation Notices: Not Supported 00:14:04.575 ANA Change Notices: Not Supported 00:14:04.575 PLE Aggregate Log Change Notices: Not Supported 00:14:04.575 LBA Status Info Alert Notices: Not Supported 00:14:04.575 EGE Aggregate Log Change Notices: Not Supported 00:14:04.576 Normal NVM Subsystem Shutdown event: Not Supported 00:14:04.576 Zone Descriptor Change Notices: Not Supported 00:14:04.576 Discovery Log Change Notices: Not Supported 00:14:04.576 Controller Attributes 00:14:04.576 128-bit Host Identifier: Supported 00:14:04.576 Non-Operational Permissive Mode: Not Supported 00:14:04.576 NVM Sets: Not Supported 00:14:04.576 Read Recovery Levels: Not Supported 00:14:04.576 Endurance Groups: Not Supported 00:14:04.576 Predictable Latency Mode: Not Supported 00:14:04.576 Traffic Based Keep ALive: Not Supported 00:14:04.576 Namespace Granularity: Not Supported 00:14:04.576 SQ Associations: Not Supported 00:14:04.576 UUID List: Not Supported 00:14:04.576 Multi-Domain Subsystem: Not Supported 00:14:04.576 Fixed Capacity Management: Not Supported 00:14:04.576 Variable Capacity Management: Not Supported 00:14:04.576 Delete Endurance Group: Not Supported 00:14:04.576 Delete NVM Set: Not Supported 00:14:04.576 Extended LBA Formats Supported: Not Supported 00:14:04.576 Flexible Data Placement Supported: Not Supported 00:14:04.576 00:14:04.576 Controller Memory Buffer Support 00:14:04.576 ================================ 00:14:04.576 
Supported: No 00:14:04.576 00:14:04.576 Persistent Memory Region Support 00:14:04.576 ================================ 00:14:04.576 Supported: No 00:14:04.576 00:14:04.576 Admin Command Set Attributes 00:14:04.576 ============================ 00:14:04.576 Security Send/Receive: Not Supported 00:14:04.576 Format NVM: Not Supported 00:14:04.576 Firmware Activate/Download: Not Supported 00:14:04.576 Namespace Management: Not Supported 00:14:04.576 Device Self-Test: Not Supported 00:14:04.576 Directives: Not Supported 00:14:04.576 NVMe-MI: Not Supported 00:14:04.576 Virtualization Management: Not Supported 00:14:04.576 Doorbell Buffer Config: Not Supported 00:14:04.576 Get LBA Status Capability: Not Supported 00:14:04.576 Command & Feature Lockdown Capability: Not Supported 00:14:04.576 Abort Command Limit: 4 00:14:04.576 Async Event Request Limit: 4 00:14:04.576 Number of Firmware Slots: N/A 00:14:04.576 Firmware Slot 1 Read-Only: N/A 00:14:04.576 Firmware Activation Without Reset: N/A 00:14:04.576 Multiple Update Detection Support: N/A 00:14:04.576 Firmware Update Granularity: No Information Provided 00:14:04.576 Per-Namespace SMART Log: No 00:14:04.576 Asymmetric Namespace Access Log Page: Not Supported 00:14:04.576 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:04.576 Command Effects Log Page: Supported 00:14:04.576 Get Log Page Extended Data: Supported 00:14:04.576 Telemetry Log Pages: Not Supported 00:14:04.576 Persistent Event Log Pages: Not Supported 00:14:04.576 Supported Log Pages Log Page: May Support 00:14:04.576 Commands Supported & Effects Log Page: Not Supported 00:14:04.576 Feature Identifiers & Effects Log Page:May Support 00:14:04.576 NVMe-MI Commands & Effects Log Page: May Support 00:14:04.576 Data Area 4 for Telemetry Log: Not Supported 00:14:04.576 Error Log Page Entries Supported: 128 00:14:04.576 Keep Alive: Supported 00:14:04.576 Keep Alive Granularity: 10000 ms 00:14:04.576 00:14:04.576 NVM Command Set Attributes 00:14:04.576 ========================== 00:14:04.576 Submission Queue Entry Size 00:14:04.576 Max: 64 00:14:04.576 Min: 64 00:14:04.576 Completion Queue Entry Size 00:14:04.576 Max: 16 00:14:04.576 Min: 16 00:14:04.576 Number of Namespaces: 32 00:14:04.576 Compare Command: Supported 00:14:04.576 Write Uncorrectable Command: Not Supported 00:14:04.576 Dataset Management Command: Supported 00:14:04.576 Write Zeroes Command: Supported 00:14:04.576 Set Features Save Field: Not Supported 00:14:04.576 Reservations: Not Supported 00:14:04.576 Timestamp: Not Supported 00:14:04.576 Copy: Supported 00:14:04.576 Volatile Write Cache: Present 00:14:04.576 Atomic Write Unit (Normal): 1 00:14:04.576 Atomic Write Unit (PFail): 1 00:14:04.576 Atomic Compare & Write Unit: 1 00:14:04.576 Fused Compare & Write: Supported 00:14:04.576 Scatter-Gather List 00:14:04.576 SGL Command Set: Supported (Dword aligned) 00:14:04.576 SGL Keyed: Not Supported 00:14:04.576 SGL Bit Bucket Descriptor: Not Supported 00:14:04.576 SGL Metadata Pointer: Not Supported 00:14:04.576 Oversized SGL: Not Supported 00:14:04.576 SGL Metadata Address: Not Supported 00:14:04.576 SGL Offset: Not Supported 00:14:04.576 Transport SGL Data Block: Not Supported 00:14:04.576 Replay Protected Memory Block: Not Supported 00:14:04.576 00:14:04.576 Firmware Slot Information 00:14:04.576 ========================= 00:14:04.576 Active slot: 1 00:14:04.576 Slot 1 Firmware Revision: 25.01 00:14:04.576 00:14:04.576 00:14:04.576 Commands Supported and Effects 00:14:04.576 ============================== 00:14:04.576 Admin 
Commands 00:14:04.576 -------------- 00:14:04.576 Get Log Page (02h): Supported 00:14:04.576 Identify (06h): Supported 00:14:04.576 Abort (08h): Supported 00:14:04.576 Set Features (09h): Supported 00:14:04.576 Get Features (0Ah): Supported 00:14:04.576 Asynchronous Event Request (0Ch): Supported 00:14:04.576 Keep Alive (18h): Supported 00:14:04.576 I/O Commands 00:14:04.576 ------------ 00:14:04.576 Flush (00h): Supported LBA-Change 00:14:04.576 Write (01h): Supported LBA-Change 00:14:04.576 Read (02h): Supported 00:14:04.576 Compare (05h): Supported 00:14:04.576 Write Zeroes (08h): Supported LBA-Change 00:14:04.576 Dataset Management (09h): Supported LBA-Change 00:14:04.576 Copy (19h): Supported LBA-Change 00:14:04.576 00:14:04.576 Error Log 00:14:04.576 ========= 00:14:04.576 00:14:04.576 Arbitration 00:14:04.576 =========== 00:14:04.576 Arbitration Burst: 1 00:14:04.576 00:14:04.576 Power Management 00:14:04.576 ================ 00:14:04.576 Number of Power States: 1 00:14:04.576 Current Power State: Power State #0 00:14:04.576 Power State #0: 00:14:04.576 Max Power: 0.00 W 00:14:04.576 Non-Operational State: Operational 00:14:04.576 Entry Latency: Not Reported 00:14:04.576 Exit Latency: Not Reported 00:14:04.576 Relative Read Throughput: 0 00:14:04.576 Relative Read Latency: 0 00:14:04.576 Relative Write Throughput: 0 00:14:04.576 Relative Write Latency: 0 00:14:04.576 Idle Power: Not Reported 00:14:04.576 Active Power: Not Reported 00:14:04.576 Non-Operational Permissive Mode: Not Supported 00:14:04.576 00:14:04.576 Health Information 00:14:04.576 ================== 00:14:04.576 Critical Warnings: 00:14:04.576 Available Spare Space: OK 00:14:04.576 Temperature: OK 00:14:04.576 Device Reliability: OK 00:14:04.576 Read Only: No 00:14:04.576 Volatile Memory Backup: OK 00:14:04.576 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:04.576 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:04.576 Available Spare: 0% 00:14:04.576 Available Sp[2024-11-15 11:32:05.287118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:04.576 [2024-11-15 11:32:05.287130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:04.576 [2024-11-15 11:32:05.287161] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:04.576 [2024-11-15 11:32:05.287173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.576 [2024-11-15 11:32:05.287182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.576 [2024-11-15 11:32:05.287190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.576 [2024-11-15 11:32:05.287198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.576 [2024-11-15 11:32:05.290469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:04.576 [2024-11-15 11:32:05.290484] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:04.576 [2024-11-15 11:32:05.291012] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.576 [2024-11-15 11:32:05.291061] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:04.576 [2024-11-15 11:32:05.291069] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:04.576 [2024-11-15 11:32:05.292028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:04.576 [2024-11-15 11:32:05.292043] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:04.576 [2024-11-15 11:32:05.292101] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:04.576 [2024-11-15 11:32:05.294057] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.576 are Threshold: 0% 00:14:04.576 Life Percentage Used: 0% 00:14:04.576 Data Units Read: 0 00:14:04.576 Data Units Written: 0 00:14:04.576 Host Read Commands: 0 00:14:04.576 Host Write Commands: 0 00:14:04.576 Controller Busy Time: 0 minutes 00:14:04.576 Power Cycles: 0 00:14:04.576 Power On Hours: 0 hours 00:14:04.577 Unsafe Shutdowns: 0 00:14:04.577 Unrecoverable Media Errors: 0 00:14:04.577 Lifetime Error Log Entries: 0 00:14:04.577 Warning Temperature Time: 0 minutes 00:14:04.577 Critical Temperature Time: 0 minutes 00:14:04.577 00:14:04.577 Number of Queues 00:14:04.577 ================ 00:14:04.577 Number of I/O Submission Queues: 127 00:14:04.577 Number of I/O Completion Queues: 127 00:14:04.577 00:14:04.577 Active Namespaces 00:14:04.577 ================= 00:14:04.577 Namespace ID:1 00:14:04.577 Error Recovery Timeout: Unlimited 00:14:04.577 Command Set Identifier: NVM (00h) 00:14:04.577 Deallocate: Supported 00:14:04.577 Deallocated/Unwritten Error: Not Supported 00:14:04.577 Deallocated Read Value: Unknown 00:14:04.577 Deallocate in Write Zeroes: Not Supported 00:14:04.577 Deallocated Guard Field: 0xFFFF 00:14:04.577 Flush: Supported 00:14:04.577 Reservation: Supported 00:14:04.577 Namespace Sharing Capabilities: Multiple Controllers 00:14:04.577 Size (in LBAs): 131072 (0GiB) 00:14:04.577 Capacity (in LBAs): 131072 (0GiB) 00:14:04.577 Utilization (in LBAs): 131072 (0GiB) 00:14:04.577 NGUID: 4161269D55AC4350896EC8D831135E8E 00:14:04.577 UUID: 4161269d-55ac-4350-896e-c8d831135e8e 00:14:04.577 Thin Provisioning: Not Supported 00:14:04.577 Per-NS Atomic Units: Yes 00:14:04.577 Atomic Boundary Size (Normal): 0 00:14:04.577 Atomic Boundary Size (PFail): 0 00:14:04.577 Atomic Boundary Offset: 0 00:14:04.577 Maximum Single Source Range Length: 65535 00:14:04.577 Maximum Copy Length: 65535 00:14:04.577 Maximum Source Range Count: 1 00:14:04.577 NGUID/EUI64 Never Reused: No 00:14:04.577 Namespace Write Protected: No 00:14:04.577 Number of LBA Formats: 1 00:14:04.577 Current LBA Format: LBA Format #00 00:14:04.577 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:04.577 00:14:04.577 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
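Both the identify dump above and the perf runs that follow address the target purely through a transport ID string of the form trtype:VFIOUSER traddr:<listener dir> subnqn:<nqn>; no kernel NVMe driver is involved. A minimal sketch of the client-side invocations, with SPDK_DIR assumed and the flags copied from the trace:

```bash
#!/usr/bin/env bash
# Sketch: driving the vfio-user controller from SPDK's example tools.
# SPDK_DIR is an assumption; the transport ID string matches the trace above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# Controller, namespace and log-page identify data (the large dump above).
"$SPDK_DIR/build/bin/spdk_nvme_identify" -r "$TRID" -g

# 4 KiB reads, queue depth 128, 5 seconds, on core 1 (mask 0x2), as in the @84 run.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
```

The reconnect, arbitration, hello_world and overhead runs later in the trace reuse the same -r string, only swapping the workload-specific options.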
00:14:04.836 [2024-11-15 11:32:05.536358] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.110 Initializing NVMe Controllers 00:14:10.110 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:10.110 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:10.110 Initialization complete. Launching workers. 00:14:10.110 ======================================================== 00:14:10.110 Latency(us) 00:14:10.110 Device Information : IOPS MiB/s Average min max 00:14:10.110 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39973.80 156.15 3202.57 875.43 6846.71 00:14:10.110 ======================================================== 00:14:10.110 Total : 39973.80 156.15 3202.57 875.43 6846.71 00:14:10.110 00:14:10.110 [2024-11-15 11:32:10.558168] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.110 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:10.110 [2024-11-15 11:32:10.792361] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:15.383 Initializing NVMe Controllers 00:14:15.383 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:15.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:15.383 Initialization complete. Launching workers. 
00:14:15.383 ======================================================== 00:14:15.383 Latency(us) 00:14:15.383 Device Information : IOPS MiB/s Average min max 00:14:15.383 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.45 62.72 7976.82 4985.68 10972.79 00:14:15.383 ======================================================== 00:14:15.383 Total : 16057.45 62.72 7976.82 4985.68 10972.79 00:14:15.383 00:14:15.383 [2024-11-15 11:32:15.829451] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:15.383 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:15.383 [2024-11-15 11:32:16.036478] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:20.680 [2024-11-15 11:32:21.104676] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:20.680 Initializing NVMe Controllers 00:14:20.680 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:20.680 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:20.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:20.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:20.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:20.680 Initialization complete. Launching workers. 00:14:20.680 Starting thread on core 2 00:14:20.680 Starting thread on core 3 00:14:20.680 Starting thread on core 1 00:14:20.680 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:20.680 [2024-11-15 11:32:21.445914] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.871 [2024-11-15 11:32:25.167692] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.871 Initializing NVMe Controllers 00:14:24.871 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.871 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.871 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:24.871 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:24.871 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:24.871 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:24.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:24.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:24.871 Initialization complete. Launching workers. 
00:14:24.871 Starting thread on core 1 with urgent priority queue 00:14:24.871 Starting thread on core 2 with urgent priority queue 00:14:24.871 Starting thread on core 3 with urgent priority queue 00:14:24.871 Starting thread on core 0 with urgent priority queue 00:14:24.871 SPDK bdev Controller (SPDK1 ) core 0: 2936.00 IO/s 34.06 secs/100000 ios 00:14:24.871 SPDK bdev Controller (SPDK1 ) core 1: 4472.33 IO/s 22.36 secs/100000 ios 00:14:24.871 SPDK bdev Controller (SPDK1 ) core 2: 3160.00 IO/s 31.65 secs/100000 ios 00:14:24.871 SPDK bdev Controller (SPDK1 ) core 3: 4041.67 IO/s 24.74 secs/100000 ios 00:14:24.871 ======================================================== 00:14:24.871 00:14:24.871 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:24.871 [2024-11-15 11:32:25.520955] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.871 Initializing NVMe Controllers 00:14:24.871 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.871 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:24.871 Namespace ID: 1 size: 0GB 00:14:24.871 Initialization complete. 00:14:24.871 INFO: using host memory buffer for IO 00:14:24.871 Hello world! 00:14:24.871 [2024-11-15 11:32:25.555203] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.871 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:25.130 [2024-11-15 11:32:25.907914] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.508 Initializing NVMe Controllers 00:14:26.508 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.508 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:26.508 Initialization complete. Launching workers. 
00:14:26.508 submit (in ns) avg, min, max = 9262.5, 4554.5, 4003269.1 00:14:26.508 complete (in ns) avg, min, max = 21233.9, 2702.7, 4995803.6 00:14:26.508 00:14:26.508 Submit histogram 00:14:26.508 ================ 00:14:26.508 Range in us Cumulative Count 00:14:26.508 4.538 - 4.567: 0.0059% ( 1) 00:14:26.508 4.567 - 4.596: 0.3631% ( 61) 00:14:26.508 4.596 - 4.625: 3.3378% ( 508) 00:14:26.508 4.625 - 4.655: 7.1148% ( 645) 00:14:26.508 4.655 - 4.684: 10.6752% ( 608) 00:14:26.508 4.684 - 4.713: 22.5215% ( 2023) 00:14:26.508 4.713 - 4.742: 38.1800% ( 2674) 00:14:26.508 4.742 - 4.771: 49.1128% ( 1867) 00:14:26.508 4.771 - 4.800: 59.9930% ( 1858) 00:14:26.508 4.800 - 4.829: 70.4749% ( 1790) 00:14:26.508 4.829 - 4.858: 80.0668% ( 1638) 00:14:26.508 4.858 - 4.887: 85.1086% ( 861) 00:14:26.508 4.887 - 4.916: 86.6311% ( 260) 00:14:26.508 4.916 - 4.945: 87.5856% ( 163) 00:14:26.508 4.945 - 4.975: 88.7685% ( 202) 00:14:26.508 4.975 - 5.004: 90.8005% ( 347) 00:14:26.508 5.004 - 5.033: 92.6744% ( 320) 00:14:26.508 5.033 - 5.062: 94.7298% ( 351) 00:14:26.508 5.062 - 5.091: 96.5451% ( 310) 00:14:26.508 5.091 - 5.120: 97.8041% ( 215) 00:14:26.508 5.120 - 5.149: 98.5068% ( 120) 00:14:26.508 5.149 - 5.178: 99.0455% ( 92) 00:14:26.508 5.178 - 5.207: 99.3324% ( 49) 00:14:26.508 5.207 - 5.236: 99.4203% ( 15) 00:14:26.508 5.236 - 5.265: 99.4378% ( 3) 00:14:26.508 5.265 - 5.295: 99.4496% ( 2) 00:14:26.508 5.295 - 5.324: 99.4554% ( 1) 00:14:26.508 5.324 - 5.353: 99.4613% ( 1) 00:14:26.508 7.011 - 7.040: 99.4671% ( 1) 00:14:26.508 7.069 - 7.098: 99.4730% ( 1) 00:14:26.508 7.156 - 7.185: 99.4847% ( 2) 00:14:26.508 7.505 - 7.564: 99.4905% ( 1) 00:14:26.508 7.564 - 7.622: 99.4964% ( 1) 00:14:26.508 7.796 - 7.855: 99.5081% ( 2) 00:14:26.508 7.971 - 8.029: 99.5198% ( 2) 00:14:26.508 8.087 - 8.145: 99.5257% ( 1) 00:14:26.508 8.145 - 8.204: 99.5315% ( 1) 00:14:26.508 8.204 - 8.262: 99.5374% ( 1) 00:14:26.508 8.262 - 8.320: 99.5432% ( 1) 00:14:26.508 8.320 - 8.378: 99.5491% ( 1) 00:14:26.508 8.378 - 8.436: 99.5608% ( 2) 00:14:26.508 8.495 - 8.553: 99.5667% ( 1) 00:14:26.508 8.553 - 8.611: 99.5842% ( 3) 00:14:26.508 8.611 - 8.669: 99.5959% ( 2) 00:14:26.508 8.669 - 8.727: 99.6194% ( 4) 00:14:26.508 8.727 - 8.785: 99.6252% ( 1) 00:14:26.508 8.844 - 8.902: 99.6311% ( 1) 00:14:26.508 8.902 - 8.960: 99.6369% ( 1) 00:14:26.508 8.960 - 9.018: 99.6604% ( 4) 00:14:26.508 9.193 - 9.251: 99.6779% ( 3) 00:14:26.508 9.309 - 9.367: 99.6838% ( 1) 00:14:26.508 9.367 - 9.425: 99.6896% ( 1) 00:14:26.508 9.484 - 9.542: 99.7014% ( 2) 00:14:26.508 9.542 - 9.600: 99.7189% ( 3) 00:14:26.508 9.600 - 9.658: 99.7306% ( 2) 00:14:26.508 9.716 - 9.775: 99.7482% ( 3) 00:14:26.508 9.833 - 9.891: 99.7541% ( 1) 00:14:26.508 9.891 - 9.949: 99.7658% ( 2) 00:14:26.508 9.949 - 10.007: 99.7716% ( 1) 00:14:26.508 10.007 - 10.065: 99.7775% ( 1) 00:14:26.508 10.065 - 10.124: 99.7892% ( 2) 00:14:26.508 10.240 - 10.298: 99.7950% ( 1) 00:14:26.508 10.473 - 10.531: 99.8009% ( 1) 00:14:26.508 10.647 - 10.705: 99.8126% ( 2) 00:14:26.508 10.705 - 10.764: 99.8185% ( 1) 00:14:26.508 10.764 - 10.822: 99.8243% ( 1) 00:14:26.508 10.822 - 10.880: 99.8302% ( 1) 00:14:26.508 10.938 - 10.996: 99.8360% ( 1) 00:14:26.508 11.345 - 11.404: 99.8419% ( 1) 00:14:26.508 12.858 - 12.916: 99.8477% ( 1) 00:14:26.508 13.207 - 13.265: 99.8536% ( 1) 00:14:26.508 14.545 - 14.604: 99.8595% ( 1) 00:14:26.508 16.175 - 16.291: 99.8653% ( 1) 00:14:26.508 16.524 - 16.640: 99.8712% ( 1) 00:14:26.508 17.687 - 17.804: 99.8770% ( 1) 00:14:26.508 18.036 - 18.153: 99.8829% ( 1) 00:14:26.508 19.782 - 
19.898: 99.8887% ( 1) 00:14:26.508 3991.738 - 4021.527: 100.0000% ( 19) 00:14:26.508 00:14:26.508 Complete histogram 00:14:26.508 ================== 00:14:26.509 Ra[2024-11-15 11:32:26.934855] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.509 nge in us Cumulative Count 00:14:26.509 2.691 - 2.705: 0.0059% ( 1) 00:14:26.509 2.705 - 2.720: 0.1991% ( 33) 00:14:26.509 2.720 - 2.735: 3.7243% ( 602) 00:14:26.509 2.735 - 2.749: 12.0162% ( 1416) 00:14:26.509 2.749 - 2.764: 15.6234% ( 616) 00:14:26.509 2.764 - 2.778: 19.2715% ( 623) 00:14:26.509 2.778 - 2.793: 41.2660% ( 3756) 00:14:26.509 2.793 - 2.807: 72.9636% ( 5413) 00:14:26.509 2.807 - 2.822: 84.0546% ( 1894) 00:14:26.509 2.822 - 2.836: 87.9721% ( 669) 00:14:26.509 2.836 - 2.851: 89.8753% ( 325) 00:14:26.509 2.851 - 2.865: 91.6145% ( 297) 00:14:26.509 2.865 - 2.880: 94.8293% ( 549) 00:14:26.509 2.880 - 2.895: 97.8216% ( 511) 00:14:26.509 2.895 - 2.909: 98.9401% ( 191) 00:14:26.509 2.909 - 2.924: 99.1802% ( 41) 00:14:26.509 2.924 - 2.938: 99.2095% ( 5) 00:14:26.509 2.938 - 2.953: 99.2153% ( 1) 00:14:26.509 2.953 - 2.967: 99.2270% ( 2) 00:14:26.509 2.967 - 2.982: 99.2446% ( 3) 00:14:26.509 2.996 - 3.011: 99.2505% ( 1) 00:14:26.509 3.011 - 3.025: 99.2563% ( 1) 00:14:26.509 3.055 - 3.069: 99.2622% ( 1) 00:14:26.509 4.742 - 4.771: 99.2680% ( 1) 00:14:26.509 5.149 - 5.178: 99.2739% ( 1) 00:14:26.509 5.818 - 5.847: 99.2797% ( 1) 00:14:26.509 5.964 - 5.993: 99.2856% ( 1) 00:14:26.509 6.022 - 6.051: 99.2973% ( 2) 00:14:26.509 6.196 - 6.225: 99.3032% ( 1) 00:14:26.509 6.225 - 6.255: 99.3090% ( 1) 00:14:26.509 6.255 - 6.284: 99.3149% ( 1) 00:14:26.509 6.284 - 6.313: 99.3207% ( 1) 00:14:26.509 6.371 - 6.400: 99.3266% ( 1) 00:14:26.509 6.400 - 6.429: 99.3441% ( 3) 00:14:26.509 6.487 - 6.516: 99.3500% ( 1) 00:14:26.509 6.516 - 6.545: 99.3559% ( 1) 00:14:26.509 6.575 - 6.604: 99.3617% ( 1) 00:14:26.509 6.604 - 6.633: 99.3734% ( 2) 00:14:26.509 6.633 - 6.662: 99.3851% ( 2) 00:14:26.509 6.807 - 6.836: 99.3968% ( 2) 00:14:26.509 6.865 - 6.895: 99.4027% ( 1) 00:14:26.509 7.011 - 7.040: 99.4086% ( 1) 00:14:26.509 7.040 - 7.069: 99.4144% ( 1) 00:14:26.509 7.185 - 7.215: 99.4203% ( 1) 00:14:26.509 7.302 - 7.331: 99.4261% ( 1) 00:14:26.509 7.418 - 7.447: 99.4320% ( 1) 00:14:26.509 7.447 - 7.505: 99.4378% ( 1) 00:14:26.509 7.505 - 7.564: 99.4496% ( 2) 00:14:26.509 7.564 - 7.622: 99.4554% ( 1) 00:14:26.509 7.680 - 7.738: 99.4671% ( 2) 00:14:26.509 7.855 - 7.913: 99.4730% ( 1) 00:14:26.509 7.913 - 7.971: 99.4788% ( 1) 00:14:26.509 7.971 - 8.029: 99.4847% ( 1) 00:14:26.509 8.029 - 8.087: 99.4905% ( 1) 00:14:26.509 8.087 - 8.145: 99.4964% ( 1) 00:14:26.509 8.553 - 8.611: 99.5023% ( 1) 00:14:26.509 8.611 - 8.669: 99.5140% ( 2) 00:14:26.509 8.669 - 8.727: 99.5198% ( 1) 00:14:26.509 8.844 - 8.902: 99.5257% ( 1) 00:14:26.509 9.425 - 9.484: 99.5315% ( 1) 00:14:26.509 9.484 - 9.542: 99.5374% ( 1) 00:14:26.509 2904.436 - 2919.331: 99.5432% ( 1) 00:14:26.509 3038.487 - 3053.382: 99.5491% ( 1) 00:14:26.509 3991.738 - 4021.527: 99.9941% ( 76) 00:14:26.509 4974.778 - 5004.567: 100.0000% ( 1) 00:14:26.509 00:14:26.509 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:26.509 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:26.509 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:26.509 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:26.509 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.509 [ 00:14:26.509 { 00:14:26.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.509 "subtype": "Discovery", 00:14:26.509 "listen_addresses": [], 00:14:26.509 "allow_any_host": true, 00:14:26.509 "hosts": [] 00:14:26.509 }, 00:14:26.509 { 00:14:26.509 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.509 "subtype": "NVMe", 00:14:26.509 "listen_addresses": [ 00:14:26.509 { 00:14:26.509 "trtype": "VFIOUSER", 00:14:26.509 "adrfam": "IPv4", 00:14:26.509 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.509 "trsvcid": "0" 00:14:26.509 } 00:14:26.509 ], 00:14:26.509 "allow_any_host": true, 00:14:26.509 "hosts": [], 00:14:26.509 "serial_number": "SPDK1", 00:14:26.509 "model_number": "SPDK bdev Controller", 00:14:26.509 "max_namespaces": 32, 00:14:26.509 "min_cntlid": 1, 00:14:26.509 "max_cntlid": 65519, 00:14:26.509 "namespaces": [ 00:14:26.509 { 00:14:26.509 "nsid": 1, 00:14:26.509 "bdev_name": "Malloc1", 00:14:26.509 "name": "Malloc1", 00:14:26.509 "nguid": "4161269D55AC4350896EC8D831135E8E", 00:14:26.509 "uuid": "4161269d-55ac-4350-896e-c8d831135e8e" 00:14:26.509 } 00:14:26.509 ] 00:14:26.509 }, 00:14:26.509 { 00:14:26.509 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.509 "subtype": "NVMe", 00:14:26.509 "listen_addresses": [ 00:14:26.509 { 00:14:26.509 "trtype": "VFIOUSER", 00:14:26.509 "adrfam": "IPv4", 00:14:26.509 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.509 "trsvcid": "0" 00:14:26.509 } 00:14:26.509 ], 00:14:26.509 "allow_any_host": true, 00:14:26.509 "hosts": [], 00:14:26.509 "serial_number": "SPDK2", 00:14:26.509 "model_number": "SPDK bdev Controller", 00:14:26.509 "max_namespaces": 32, 00:14:26.509 "min_cntlid": 1, 00:14:26.509 "max_cntlid": 65519, 00:14:26.509 "namespaces": [ 00:14:26.509 { 00:14:26.509 "nsid": 1, 00:14:26.509 "bdev_name": "Malloc2", 00:14:26.509 "name": "Malloc2", 00:14:26.509 "nguid": "0E2B64C238BE46118B69B389098E0BB4", 00:14:26.509 "uuid": "0e2b64c2-38be-4611-8b69-b389098e0bb4" 00:14:26.509 } 00:14:26.509 ] 00:14:26.509 } 00:14:26.509 ] 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1193325 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:26.509 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:26.768 [2024-11-15 11:32:27.472916] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.768 Malloc3 00:14:26.768 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:27.028 [2024-11-15 11:32:27.842582] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:27.028 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:27.288 Asynchronous Event Request test 00:14:27.289 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:27.289 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:27.289 Registering asynchronous event callbacks... 00:14:27.289 Starting namespace attribute notice tests for all controllers... 00:14:27.289 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:27.289 aer_cb - Changed Namespace 00:14:27.289 Cleaning up... 00:14:27.289 [ 00:14:27.289 { 00:14:27.289 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.289 "subtype": "Discovery", 00:14:27.289 "listen_addresses": [], 00:14:27.289 "allow_any_host": true, 00:14:27.289 "hosts": [] 00:14:27.289 }, 00:14:27.289 { 00:14:27.289 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.289 "subtype": "NVMe", 00:14:27.289 "listen_addresses": [ 00:14:27.289 { 00:14:27.289 "trtype": "VFIOUSER", 00:14:27.289 "adrfam": "IPv4", 00:14:27.289 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.289 "trsvcid": "0" 00:14:27.289 } 00:14:27.289 ], 00:14:27.289 "allow_any_host": true, 00:14:27.289 "hosts": [], 00:14:27.289 "serial_number": "SPDK1", 00:14:27.289 "model_number": "SPDK bdev Controller", 00:14:27.289 "max_namespaces": 32, 00:14:27.289 "min_cntlid": 1, 00:14:27.289 "max_cntlid": 65519, 00:14:27.289 "namespaces": [ 00:14:27.289 { 00:14:27.289 "nsid": 1, 00:14:27.289 "bdev_name": "Malloc1", 00:14:27.289 "name": "Malloc1", 00:14:27.289 "nguid": "4161269D55AC4350896EC8D831135E8E", 00:14:27.289 "uuid": "4161269d-55ac-4350-896e-c8d831135e8e" 00:14:27.289 }, 00:14:27.289 { 00:14:27.289 "nsid": 2, 00:14:27.289 "bdev_name": "Malloc3", 00:14:27.289 "name": "Malloc3", 00:14:27.289 "nguid": "5D92B0C48811441A805E51AC5DF2BC05", 00:14:27.289 "uuid": "5d92b0c4-8811-441a-805e-51ac5df2bc05" 00:14:27.289 } 00:14:27.289 ] 00:14:27.289 }, 00:14:27.289 { 00:14:27.289 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:27.289 "subtype": "NVMe", 00:14:27.289 "listen_addresses": [ 00:14:27.289 { 00:14:27.289 "trtype": "VFIOUSER", 00:14:27.289 "adrfam": "IPv4", 00:14:27.289 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.289 "trsvcid": "0" 00:14:27.289 } 00:14:27.289 ], 00:14:27.289 "allow_any_host": true, 00:14:27.289 "hosts": [], 00:14:27.289 "serial_number": "SPDK2", 00:14:27.289 "model_number": "SPDK bdev 
Controller", 00:14:27.289 "max_namespaces": 32, 00:14:27.289 "min_cntlid": 1, 00:14:27.289 "max_cntlid": 65519, 00:14:27.289 "namespaces": [ 00:14:27.289 { 00:14:27.289 "nsid": 1, 00:14:27.289 "bdev_name": "Malloc2", 00:14:27.289 "name": "Malloc2", 00:14:27.289 "nguid": "0E2B64C238BE46118B69B389098E0BB4", 00:14:27.289 "uuid": "0e2b64c2-38be-4611-8b69-b389098e0bb4" 00:14:27.289 } 00:14:27.289 ] 00:14:27.289 } 00:14:27.289 ] 00:14:27.289 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1193325 00:14:27.289 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.289 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:27.289 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:27.289 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:27.550 [2024-11-15 11:32:28.146241] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:14:27.550 [2024-11-15 11:32:28.146281] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193352 ] 00:14:27.550 [2024-11-15 11:32:28.202327] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:27.550 [2024-11-15 11:32:28.210744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.550 [2024-11-15 11:32:28.210774] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3b6827c000 00:14:27.550 [2024-11-15 11:32:28.211741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.212751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.213752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.214762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.215770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.216780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.217787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:27.550 [2024-11-15 11:32:28.218792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:27.550 [2024-11-15 11:32:28.219806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:27.550 [2024-11-15 11:32:28.219821] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3b68271000 00:14:27.550 [2024-11-15 11:32:28.221233] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.550 [2024-11-15 11:32:28.243163] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:27.550 [2024-11-15 11:32:28.243197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:27.550 [2024-11-15 11:32:28.245277] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:27.550 [2024-11-15 11:32:28.245330] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:27.550 [2024-11-15 11:32:28.245422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:27.550 [2024-11-15 11:32:28.245438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:27.550 [2024-11-15 11:32:28.245445] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:27.550 [2024-11-15 11:32:28.246284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:27.550 [2024-11-15 11:32:28.246298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:27.550 [2024-11-15 11:32:28.246309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:27.550 [2024-11-15 11:32:28.247286] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:27.550 [2024-11-15 11:32:28.247299] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:27.550 [2024-11-15 11:32:28.247309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.550 [2024-11-15 11:32:28.248292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:27.550 [2024-11-15 11:32:28.248306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.550 [2024-11-15 11:32:28.249300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:27.550 [2024-11-15 11:32:28.249316] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:14:27.550 [2024-11-15 11:32:28.249323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:27.550 [2024-11-15 11:32:28.249332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.550 [2024-11-15 11:32:28.249443] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:27.550 [2024-11-15 11:32:28.249449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.550 [2024-11-15 11:32:28.249456] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:27.550 [2024-11-15 11:32:28.250317] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:27.550 [2024-11-15 11:32:28.251318] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:27.550 [2024-11-15 11:32:28.252334] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:27.550 [2024-11-15 11:32:28.253333] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.550 [2024-11-15 11:32:28.253383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.550 [2024-11-15 11:32:28.254350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:27.550 [2024-11-15 11:32:28.254364] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.550 [2024-11-15 11:32:28.254371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:27.550 [2024-11-15 11:32:28.254396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:27.550 [2024-11-15 11:32:28.254411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.254426] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.551 [2024-11-15 11:32:28.254433] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.551 [2024-11-15 11:32:28.254438] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.551 [2024-11-15 11:32:28.254452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.262473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:27.551 
[2024-11-15 11:32:28.262490] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:27.551 [2024-11-15 11:32:28.262497] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:27.551 [2024-11-15 11:32:28.262503] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:27.551 [2024-11-15 11:32:28.262509] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:27.551 [2024-11-15 11:32:28.262523] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:27.551 [2024-11-15 11:32:28.262530] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:27.551 [2024-11-15 11:32:28.262536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.262548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.262562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.270468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.270484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.551 [2024-11-15 11:32:28.270496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.551 [2024-11-15 11:32:28.270506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.551 [2024-11-15 11:32:28.270517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.551 [2024-11-15 11:32:28.270524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.270533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.270545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.278469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.278484] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:27.551 [2024-11-15 11:32:28.278491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:27.551 [2024-11-15 11:32:28.278500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.278508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.278520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.286470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.286551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.286562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.286572] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:27.551 [2024-11-15 11:32:28.286579] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:27.551 [2024-11-15 11:32:28.286584] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.551 [2024-11-15 11:32:28.286598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.294468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.294484] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:27.551 [2024-11-15 11:32:28.294496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.294506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.294516] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.551 [2024-11-15 11:32:28.294522] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.551 [2024-11-15 11:32:28.294526] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.551 [2024-11-15 11:32:28.294534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.302470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.302489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.302500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.302510] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:27.551 [2024-11-15 11:32:28.302516] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.551 [2024-11-15 11:32:28.302521] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.551 [2024-11-15 11:32:28.302528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.310470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.310484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310534] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:27.551 [2024-11-15 11:32:28.310540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:27.551 [2024-11-15 11:32:28.310549] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:27.551 [2024-11-15 11:32:28.310569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.318472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.318491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.326468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.326486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.334470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:14:27.551 [2024-11-15 11:32:28.334488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.342469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:27.551 [2024-11-15 11:32:28.342491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:27.551 [2024-11-15 11:32:28.342498] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:27.551 [2024-11-15 11:32:28.342502] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:27.551 [2024-11-15 11:32:28.342507] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:27.551 [2024-11-15 11:32:28.342512] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:27.551 [2024-11-15 11:32:28.342520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:27.551 [2024-11-15 11:32:28.342530] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:27.551 [2024-11-15 11:32:28.342536] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:27.551 [2024-11-15 11:32:28.342541] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.551 [2024-11-15 11:32:28.342549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:27.551 [2024-11-15 11:32:28.342559] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:27.551 [2024-11-15 11:32:28.342564] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:27.551 [2024-11-15 11:32:28.342569] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.551 [2024-11-15 11:32:28.342576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:27.552 [2024-11-15 11:32:28.342586] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:27.552 [2024-11-15 11:32:28.342592] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:27.552 [2024-11-15 11:32:28.342597] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:27.552 [2024-11-15 11:32:28.342605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:27.552 [2024-11-15 11:32:28.350469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:27.552 [2024-11-15 11:32:28.350494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:27.552 [2024-11-15 11:32:28.350508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:27.552 
[2024-11-15 11:32:28.350517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:27.552 ===================================================== 00:14:27.552 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:27.552 ===================================================== 00:14:27.552 Controller Capabilities/Features 00:14:27.552 ================================ 00:14:27.552 Vendor ID: 4e58 00:14:27.552 Subsystem Vendor ID: 4e58 00:14:27.552 Serial Number: SPDK2 00:14:27.552 Model Number: SPDK bdev Controller 00:14:27.552 Firmware Version: 25.01 00:14:27.552 Recommended Arb Burst: 6 00:14:27.552 IEEE OUI Identifier: 8d 6b 50 00:14:27.552 Multi-path I/O 00:14:27.552 May have multiple subsystem ports: Yes 00:14:27.552 May have multiple controllers: Yes 00:14:27.552 Associated with SR-IOV VF: No 00:14:27.552 Max Data Transfer Size: 131072 00:14:27.552 Max Number of Namespaces: 32 00:14:27.552 Max Number of I/O Queues: 127 00:14:27.552 NVMe Specification Version (VS): 1.3 00:14:27.552 NVMe Specification Version (Identify): 1.3 00:14:27.552 Maximum Queue Entries: 256 00:14:27.552 Contiguous Queues Required: Yes 00:14:27.552 Arbitration Mechanisms Supported 00:14:27.552 Weighted Round Robin: Not Supported 00:14:27.552 Vendor Specific: Not Supported 00:14:27.552 Reset Timeout: 15000 ms 00:14:27.552 Doorbell Stride: 4 bytes 00:14:27.552 NVM Subsystem Reset: Not Supported 00:14:27.552 Command Sets Supported 00:14:27.552 NVM Command Set: Supported 00:14:27.552 Boot Partition: Not Supported 00:14:27.552 Memory Page Size Minimum: 4096 bytes 00:14:27.552 Memory Page Size Maximum: 4096 bytes 00:14:27.552 Persistent Memory Region: Not Supported 00:14:27.552 Optional Asynchronous Events Supported 00:14:27.552 Namespace Attribute Notices: Supported 00:14:27.552 Firmware Activation Notices: Not Supported 00:14:27.552 ANA Change Notices: Not Supported 00:14:27.552 PLE Aggregate Log Change Notices: Not Supported 00:14:27.552 LBA Status Info Alert Notices: Not Supported 00:14:27.552 EGE Aggregate Log Change Notices: Not Supported 00:14:27.552 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.552 Zone Descriptor Change Notices: Not Supported 00:14:27.552 Discovery Log Change Notices: Not Supported 00:14:27.552 Controller Attributes 00:14:27.552 128-bit Host Identifier: Supported 00:14:27.552 Non-Operational Permissive Mode: Not Supported 00:14:27.552 NVM Sets: Not Supported 00:14:27.552 Read Recovery Levels: Not Supported 00:14:27.552 Endurance Groups: Not Supported 00:14:27.552 Predictable Latency Mode: Not Supported 00:14:27.552 Traffic Based Keep ALive: Not Supported 00:14:27.552 Namespace Granularity: Not Supported 00:14:27.552 SQ Associations: Not Supported 00:14:27.552 UUID List: Not Supported 00:14:27.552 Multi-Domain Subsystem: Not Supported 00:14:27.552 Fixed Capacity Management: Not Supported 00:14:27.552 Variable Capacity Management: Not Supported 00:14:27.552 Delete Endurance Group: Not Supported 00:14:27.552 Delete NVM Set: Not Supported 00:14:27.552 Extended LBA Formats Supported: Not Supported 00:14:27.552 Flexible Data Placement Supported: Not Supported 00:14:27.552 00:14:27.552 Controller Memory Buffer Support 00:14:27.552 ================================ 00:14:27.552 Supported: No 00:14:27.552 00:14:27.552 Persistent Memory Region Support 00:14:27.552 ================================ 00:14:27.552 Supported: No 00:14:27.552 00:14:27.552 Admin Command Set Attributes 
00:14:27.552 ============================ 00:14:27.552 Security Send/Receive: Not Supported 00:14:27.552 Format NVM: Not Supported 00:14:27.552 Firmware Activate/Download: Not Supported 00:14:27.552 Namespace Management: Not Supported 00:14:27.552 Device Self-Test: Not Supported 00:14:27.552 Directives: Not Supported 00:14:27.552 NVMe-MI: Not Supported 00:14:27.552 Virtualization Management: Not Supported 00:14:27.552 Doorbell Buffer Config: Not Supported 00:14:27.552 Get LBA Status Capability: Not Supported 00:14:27.552 Command & Feature Lockdown Capability: Not Supported 00:14:27.552 Abort Command Limit: 4 00:14:27.552 Async Event Request Limit: 4 00:14:27.552 Number of Firmware Slots: N/A 00:14:27.552 Firmware Slot 1 Read-Only: N/A 00:14:27.552 Firmware Activation Without Reset: N/A 00:14:27.552 Multiple Update Detection Support: N/A 00:14:27.552 Firmware Update Granularity: No Information Provided 00:14:27.552 Per-Namespace SMART Log: No 00:14:27.552 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.552 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:27.552 Command Effects Log Page: Supported 00:14:27.552 Get Log Page Extended Data: Supported 00:14:27.552 Telemetry Log Pages: Not Supported 00:14:27.552 Persistent Event Log Pages: Not Supported 00:14:27.552 Supported Log Pages Log Page: May Support 00:14:27.552 Commands Supported & Effects Log Page: Not Supported 00:14:27.552 Feature Identifiers & Effects Log Page:May Support 00:14:27.552 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.552 Data Area 4 for Telemetry Log: Not Supported 00:14:27.552 Error Log Page Entries Supported: 128 00:14:27.552 Keep Alive: Supported 00:14:27.552 Keep Alive Granularity: 10000 ms 00:14:27.552 00:14:27.552 NVM Command Set Attributes 00:14:27.552 ========================== 00:14:27.552 Submission Queue Entry Size 00:14:27.552 Max: 64 00:14:27.552 Min: 64 00:14:27.552 Completion Queue Entry Size 00:14:27.552 Max: 16 00:14:27.552 Min: 16 00:14:27.552 Number of Namespaces: 32 00:14:27.552 Compare Command: Supported 00:14:27.552 Write Uncorrectable Command: Not Supported 00:14:27.552 Dataset Management Command: Supported 00:14:27.552 Write Zeroes Command: Supported 00:14:27.552 Set Features Save Field: Not Supported 00:14:27.552 Reservations: Not Supported 00:14:27.552 Timestamp: Not Supported 00:14:27.552 Copy: Supported 00:14:27.552 Volatile Write Cache: Present 00:14:27.552 Atomic Write Unit (Normal): 1 00:14:27.552 Atomic Write Unit (PFail): 1 00:14:27.552 Atomic Compare & Write Unit: 1 00:14:27.552 Fused Compare & Write: Supported 00:14:27.552 Scatter-Gather List 00:14:27.552 SGL Command Set: Supported (Dword aligned) 00:14:27.552 SGL Keyed: Not Supported 00:14:27.552 SGL Bit Bucket Descriptor: Not Supported 00:14:27.552 SGL Metadata Pointer: Not Supported 00:14:27.552 Oversized SGL: Not Supported 00:14:27.552 SGL Metadata Address: Not Supported 00:14:27.552 SGL Offset: Not Supported 00:14:27.552 Transport SGL Data Block: Not Supported 00:14:27.552 Replay Protected Memory Block: Not Supported 00:14:27.552 00:14:27.552 Firmware Slot Information 00:14:27.552 ========================= 00:14:27.552 Active slot: 1 00:14:27.552 Slot 1 Firmware Revision: 25.01 00:14:27.552 00:14:27.552 00:14:27.552 Commands Supported and Effects 00:14:27.552 ============================== 00:14:27.552 Admin Commands 00:14:27.552 -------------- 00:14:27.552 Get Log Page (02h): Supported 00:14:27.552 Identify (06h): Supported 00:14:27.552 Abort (08h): Supported 00:14:27.552 Set Features (09h): Supported 
00:14:27.552 Get Features (0Ah): Supported 00:14:27.552 Asynchronous Event Request (0Ch): Supported 00:14:27.552 Keep Alive (18h): Supported 00:14:27.552 I/O Commands 00:14:27.552 ------------ 00:14:27.552 Flush (00h): Supported LBA-Change 00:14:27.552 Write (01h): Supported LBA-Change 00:14:27.552 Read (02h): Supported 00:14:27.552 Compare (05h): Supported 00:14:27.552 Write Zeroes (08h): Supported LBA-Change 00:14:27.552 Dataset Management (09h): Supported LBA-Change 00:14:27.552 Copy (19h): Supported LBA-Change 00:14:27.552 00:14:27.552 Error Log 00:14:27.552 ========= 00:14:27.552 00:14:27.552 Arbitration 00:14:27.552 =========== 00:14:27.552 Arbitration Burst: 1 00:14:27.552 00:14:27.552 Power Management 00:14:27.552 ================ 00:14:27.552 Number of Power States: 1 00:14:27.552 Current Power State: Power State #0 00:14:27.552 Power State #0: 00:14:27.552 Max Power: 0.00 W 00:14:27.552 Non-Operational State: Operational 00:14:27.552 Entry Latency: Not Reported 00:14:27.552 Exit Latency: Not Reported 00:14:27.553 Relative Read Throughput: 0 00:14:27.553 Relative Read Latency: 0 00:14:27.553 Relative Write Throughput: 0 00:14:27.553 Relative Write Latency: 0 00:14:27.553 Idle Power: Not Reported 00:14:27.553 Active Power: Not Reported 00:14:27.553 Non-Operational Permissive Mode: Not Supported 00:14:27.553 00:14:27.553 Health Information 00:14:27.553 ================== 00:14:27.553 Critical Warnings: 00:14:27.553 Available Spare Space: OK 00:14:27.553 Temperature: OK 00:14:27.553 Device Reliability: OK 00:14:27.553 Read Only: No 00:14:27.553 Volatile Memory Backup: OK 00:14:27.553 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:27.553 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:27.553 Available Spare: 0% 00:14:27.553 Available Sp[2024-11-15 11:32:28.350642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:27.553 [2024-11-15 11:32:28.358478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:27.553 [2024-11-15 11:32:28.358518] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:27.553 [2024-11-15 11:32:28.358531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.553 [2024-11-15 11:32:28.358540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.553 [2024-11-15 11:32:28.358548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.553 [2024-11-15 11:32:28.358557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.553 [2024-11-15 11:32:28.358616] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:27.553 [2024-11-15 11:32:28.358630] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:27.553 [2024-11-15 11:32:28.359618] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.553 [2024-11-15 11:32:28.359682] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:27.553 [2024-11-15 11:32:28.359692] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:27.553 [2024-11-15 11:32:28.360621] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:27.553 [2024-11-15 11:32:28.360638] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:27.553 [2024-11-15 11:32:28.360694] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:27.553 [2024-11-15 11:32:28.362153] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:27.813 are Threshold: 0% 00:14:27.813 Life Percentage Used: 0% 00:14:27.813 Data Units Read: 0 00:14:27.813 Data Units Written: 0 00:14:27.813 Host Read Commands: 0 00:14:27.813 Host Write Commands: 0 00:14:27.813 Controller Busy Time: 0 minutes 00:14:27.813 Power Cycles: 0 00:14:27.813 Power On Hours: 0 hours 00:14:27.813 Unsafe Shutdowns: 0 00:14:27.813 Unrecoverable Media Errors: 0 00:14:27.813 Lifetime Error Log Entries: 0 00:14:27.813 Warning Temperature Time: 0 minutes 00:14:27.813 Critical Temperature Time: 0 minutes 00:14:27.813 00:14:27.813 Number of Queues 00:14:27.813 ================ 00:14:27.813 Number of I/O Submission Queues: 127 00:14:27.813 Number of I/O Completion Queues: 127 00:14:27.813 00:14:27.813 Active Namespaces 00:14:27.813 ================= 00:14:27.813 Namespace ID:1 00:14:27.813 Error Recovery Timeout: Unlimited 00:14:27.813 Command Set Identifier: NVM (00h) 00:14:27.813 Deallocate: Supported 00:14:27.813 Deallocated/Unwritten Error: Not Supported 00:14:27.813 Deallocated Read Value: Unknown 00:14:27.813 Deallocate in Write Zeroes: Not Supported 00:14:27.813 Deallocated Guard Field: 0xFFFF 00:14:27.813 Flush: Supported 00:14:27.813 Reservation: Supported 00:14:27.813 Namespace Sharing Capabilities: Multiple Controllers 00:14:27.813 Size (in LBAs): 131072 (0GiB) 00:14:27.813 Capacity (in LBAs): 131072 (0GiB) 00:14:27.813 Utilization (in LBAs): 131072 (0GiB) 00:14:27.813 NGUID: 0E2B64C238BE46118B69B389098E0BB4 00:14:27.813 UUID: 0e2b64c2-38be-4611-8b69-b389098e0bb4 00:14:27.813 Thin Provisioning: Not Supported 00:14:27.813 Per-NS Atomic Units: Yes 00:14:27.813 Atomic Boundary Size (Normal): 0 00:14:27.813 Atomic Boundary Size (PFail): 0 00:14:27.813 Atomic Boundary Offset: 0 00:14:27.813 Maximum Single Source Range Length: 65535 00:14:27.813 Maximum Copy Length: 65535 00:14:27.813 Maximum Source Range Count: 1 00:14:27.813 NGUID/EUI64 Never Reused: No 00:14:27.813 Namespace Write Protected: No 00:14:27.813 Number of LBA Formats: 1 00:14:27.813 Current LBA Format: LBA Format #00 00:14:27.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.813 00:14:27.813 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:27.813 [2024-11-15 11:32:28.606418] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.087 Initializing NVMe Controllers 00:14:33.087 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:33.087 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:33.087 Initialization complete. Launching workers. 00:14:33.087 ======================================================== 00:14:33.087 Latency(us) 00:14:33.087 Device Information : IOPS MiB/s Average min max 00:14:33.087 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.11 156.08 3203.06 871.02 9907.06 00:14:33.087 ======================================================== 00:14:33.087 Total : 39957.11 156.08 3203.06 871.02 9907.06 00:14:33.087 00:14:33.087 [2024-11-15 11:32:33.709719] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.087 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:33.087 [2024-11-15 11:32:33.939477] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.361 Initializing NVMe Controllers 00:14:38.361 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:38.361 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:38.361 Initialization complete. Launching workers. 00:14:38.361 ======================================================== 00:14:38.361 Latency(us) 00:14:38.361 Device Information : IOPS MiB/s Average min max 00:14:38.361 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24955.20 97.48 5129.09 1298.10 10671.88 00:14:38.361 ======================================================== 00:14:38.361 Total : 24955.20 97.48 5129.09 1298.10 10671.88 00:14:38.361 00:14:38.361 [2024-11-15 11:32:38.961088] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.361 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:38.361 [2024-11-15 11:32:39.171813] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.636 [2024-11-15 11:32:44.314542] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.636 Initializing NVMe Controllers 00:14:43.636 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.636 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:43.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:43.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:43.636 Initialization complete. Launching workers. 
00:14:43.636 Starting thread on core 2 00:14:43.636 Starting thread on core 3 00:14:43.636 Starting thread on core 1 00:14:43.636 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:43.895 [2024-11-15 11:32:44.646522] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.223 [2024-11-15 11:32:47.702734] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.223 Initializing NVMe Controllers 00:14:47.223 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.223 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:47.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:47.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:47.223 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:47.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:47.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:47.223 Initialization complete. Launching workers. 00:14:47.223 Starting thread on core 1 with urgent priority queue 00:14:47.223 Starting thread on core 2 with urgent priority queue 00:14:47.223 Starting thread on core 3 with urgent priority queue 00:14:47.223 Starting thread on core 0 with urgent priority queue 00:14:47.223 SPDK bdev Controller (SPDK2 ) core 0: 9691.00 IO/s 10.32 secs/100000 ios 00:14:47.223 SPDK bdev Controller (SPDK2 ) core 1: 9274.67 IO/s 10.78 secs/100000 ios 00:14:47.223 SPDK bdev Controller (SPDK2 ) core 2: 9982.00 IO/s 10.02 secs/100000 ios 00:14:47.223 SPDK bdev Controller (SPDK2 ) core 3: 8156.00 IO/s 12.26 secs/100000 ios 00:14:47.223 ======================================================== 00:14:47.223 00:14:47.223 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:47.520 [2024-11-15 11:32:48.053613] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.520 Initializing NVMe Controllers 00:14:47.520 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.520 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:47.520 Namespace ID: 1 size: 0GB 00:14:47.520 Initialization complete. 00:14:47.520 INFO: using host memory buffer for IO 00:14:47.520 Hello world! 
00:14:47.520 [2024-11-15 11:32:48.064683] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.520 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:47.818 [2024-11-15 11:32:48.407087] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:48.820 Initializing NVMe Controllers 00:14:48.820 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.820 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:48.820 Initialization complete. Launching workers. 00:14:48.820 submit (in ns) avg, min, max = 12048.5, 4563.6, 4004425.5 00:14:48.820 complete (in ns) avg, min, max = 18345.6, 2710.0, 4002802.7 00:14:48.820 00:14:48.820 Submit histogram 00:14:48.820 ================ 00:14:48.820 Range in us Cumulative Count 00:14:48.820 4.538 - 4.567: 0.0058% ( 1) 00:14:48.820 4.567 - 4.596: 0.0812% ( 13) 00:14:48.820 4.596 - 4.625: 0.3655% ( 49) 00:14:48.820 4.625 - 4.655: 1.9785% ( 278) 00:14:48.820 4.655 - 4.684: 5.0189% ( 524) 00:14:48.820 4.684 - 4.713: 9.2312% ( 726) 00:14:48.820 4.713 - 4.742: 15.5904% ( 1096) 00:14:48.820 4.742 - 4.771: 29.9797% ( 2480) 00:14:48.820 4.771 - 4.800: 42.3789% ( 2137) 00:14:48.820 4.800 - 4.829: 52.8866% ( 1811) 00:14:48.820 4.829 - 4.858: 65.1697% ( 2117) 00:14:48.820 4.858 - 4.887: 74.2385% ( 1563) 00:14:48.820 4.887 - 4.916: 81.9553% ( 1330) 00:14:48.820 4.916 - 4.945: 85.4424% ( 601) 00:14:48.820 4.945 - 4.975: 86.9278% ( 256) 00:14:48.820 4.975 - 5.004: 87.9199% ( 171) 00:14:48.820 5.004 - 5.033: 89.5155% ( 275) 00:14:48.820 5.033 - 5.062: 91.2272% ( 295) 00:14:48.820 5.062 - 5.091: 93.1070% ( 324) 00:14:48.820 5.091 - 5.120: 95.1494% ( 352) 00:14:48.820 5.120 - 5.149: 96.7334% ( 273) 00:14:48.820 5.149 - 5.178: 97.8648% ( 195) 00:14:48.820 5.178 - 5.207: 98.5088% ( 111) 00:14:48.820 5.207 - 5.236: 98.9730% ( 80) 00:14:48.820 5.236 - 5.265: 99.2921% ( 55) 00:14:48.820 5.265 - 5.295: 99.3734% ( 14) 00:14:48.820 5.295 - 5.324: 99.4256% ( 9) 00:14:48.820 5.324 - 5.353: 99.4488% ( 4) 00:14:48.820 6.342 - 6.371: 99.4546% ( 1) 00:14:48.820 7.156 - 7.185: 99.4604% ( 1) 00:14:48.820 7.564 - 7.622: 99.4662% ( 1) 00:14:48.820 7.622 - 7.680: 99.4720% ( 1) 00:14:48.820 7.796 - 7.855: 99.4778% ( 1) 00:14:48.820 8.029 - 8.087: 99.4836% ( 1) 00:14:48.820 8.087 - 8.145: 99.4894% ( 1) 00:14:48.820 8.145 - 8.204: 99.5010% ( 2) 00:14:48.820 8.320 - 8.378: 99.5068% ( 1) 00:14:48.820 8.378 - 8.436: 99.5126% ( 1) 00:14:48.820 8.436 - 8.495: 99.5184% ( 1) 00:14:48.820 8.553 - 8.611: 99.5300% ( 2) 00:14:48.820 8.611 - 8.669: 99.5358% ( 1) 00:14:48.820 8.727 - 8.785: 99.5416% ( 1) 00:14:48.820 8.785 - 8.844: 99.5474% ( 1) 00:14:48.820 8.844 - 8.902: 99.5590% ( 2) 00:14:48.820 8.902 - 8.960: 99.5648% ( 1) 00:14:48.820 8.960 - 9.018: 99.5706% ( 1) 00:14:48.820 9.018 - 9.076: 99.5764% ( 1) 00:14:48.820 9.193 - 9.251: 99.5822% ( 1) 00:14:48.820 9.425 - 9.484: 99.5880% ( 1) 00:14:48.820 9.542 - 9.600: 99.5938% ( 1) 00:14:48.820 9.600 - 9.658: 99.5997% ( 1) 00:14:48.820 9.658 - 9.716: 99.6113% ( 2) 00:14:48.820 9.716 - 9.775: 99.6229% ( 2) 00:14:48.820 9.775 - 9.833: 99.6461% ( 4) 00:14:48.820 9.833 - 9.891: 99.6519% ( 1) 00:14:48.820 9.949 - 10.007: 99.6635% ( 2) 00:14:48.820 10.007 - 10.065: 99.6751% ( 2) 00:14:48.820 10.124 - 10.182: 
99.6867% ( 2) 00:14:48.820 10.182 - 10.240: 99.6925% ( 1) 00:14:48.820 10.240 - 10.298: 99.7041% ( 2) 00:14:48.820 10.298 - 10.356: 99.7157% ( 2) 00:14:48.820 10.356 - 10.415: 99.7215% ( 1) 00:14:48.820 10.764 - 10.822: 99.7389% ( 3) 00:14:48.820 10.880 - 10.938: 99.7447% ( 1) 00:14:48.820 10.996 - 11.055: 99.7505% ( 1) 00:14:48.820 11.113 - 11.171: 99.7563% ( 1) 00:14:48.820 11.171 - 11.229: 99.7679% ( 2) 00:14:48.820 11.229 - 11.287: 99.7737% ( 1) 00:14:48.820 11.345 - 11.404: 99.7853% ( 2) 00:14:48.820 11.520 - 11.578: 99.7911% ( 1) 00:14:48.820 11.636 - 11.695: 99.8027% ( 2) 00:14:48.820 12.916 - 12.975: 99.8085% ( 1) 00:14:48.820 14.604 - 14.662: 99.8143% ( 1) 00:14:48.820 17.687 - 17.804: 99.8201% ( 1) 00:14:48.820 3991.738 - 4021.527: 100.0000% ( 31) 00:14:48.820 00:14:48.820 Complete histogram 00:14:48.820 ================== 00:14:48.820 Range in us Cumulative Count 00:14:48.820 2.705 - 2.720: 0.1973% ( 34) 00:14:48.820 2.720 - 2.735: 6.4868% ( 1084) 00:14:48.820 2.735 - [2024-11-15 11:32:49.506013] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:48.820 2.749: 25.2974% ( 3242) 00:14:48.820 2.749 - 2.764: 34.0064% ( 1501) 00:14:48.820 2.764 - 2.778: 36.3853% ( 410) 00:14:48.820 2.778 - 2.793: 46.8523% ( 1804) 00:14:48.820 2.793 - 2.807: 71.5057% ( 4249) 00:14:48.820 2.807 - 2.822: 83.4059% ( 2051) 00:14:48.820 2.822 - 2.836: 88.0708% ( 804) 00:14:48.820 2.836 - 2.851: 91.4708% ( 586) 00:14:48.820 2.851 - 2.865: 93.0780% ( 277) 00:14:48.820 2.865 - 2.880: 94.9695% ( 326) 00:14:48.820 2.880 - 2.895: 97.3136% ( 404) 00:14:48.820 2.895 - 2.909: 98.5321% ( 210) 00:14:48.820 2.909 - 2.924: 98.9034% ( 64) 00:14:48.820 2.924 - 2.938: 99.1355% ( 40) 00:14:48.820 2.938 - 2.953: 99.2051% ( 12) 00:14:48.820 2.953 - 2.967: 99.2167% ( 2) 00:14:48.820 2.982 - 2.996: 99.2225% ( 1) 00:14:48.820 2.996 - 3.011: 99.2341% ( 2) 00:14:48.820 3.025 - 3.040: 99.2399% ( 1) 00:14:48.820 3.127 - 3.142: 99.2457% ( 1) 00:14:48.821 5.207 - 5.236: 99.2515% ( 1) 00:14:48.821 5.265 - 5.295: 99.2573% ( 1) 00:14:48.821 5.702 - 5.731: 99.2631% ( 1) 00:14:48.821 6.080 - 6.109: 99.2689% ( 1) 00:14:48.821 6.167 - 6.196: 99.2747% ( 1) 00:14:48.821 6.225 - 6.255: 99.2805% ( 1) 00:14:48.821 6.255 - 6.284: 99.2863% ( 1) 00:14:48.821 6.371 - 6.400: 99.2979% ( 2) 00:14:48.821 6.400 - 6.429: 99.3037% ( 1) 00:14:48.821 6.458 - 6.487: 99.3095% ( 1) 00:14:48.821 6.662 - 6.691: 99.3153% ( 1) 00:14:48.821 6.691 - 6.720: 99.3211% ( 1) 00:14:48.821 6.865 - 6.895: 99.3270% ( 1) 00:14:48.821 6.953 - 6.982: 99.3328% ( 1) 00:14:48.821 6.982 - 7.011: 99.3386% ( 1) 00:14:48.821 7.011 - 7.040: 99.3444% ( 1) 00:14:48.821 7.215 - 7.244: 99.3502% ( 1) 00:14:48.821 7.244 - 7.273: 99.3560% ( 1) 00:14:48.821 7.302 - 7.331: 99.3618% ( 1) 00:14:48.821 7.389 - 7.418: 99.3676% ( 1) 00:14:48.821 7.505 - 7.564: 99.3850% ( 3) 00:14:48.821 7.564 - 7.622: 99.3908% ( 1) 00:14:48.821 7.622 - 7.680: 99.4082% ( 3) 00:14:48.821 7.796 - 7.855: 99.4198% ( 2) 00:14:48.821 7.855 - 7.913: 99.4314% ( 2) 00:14:48.821 7.913 - 7.971: 99.4430% ( 2) 00:14:48.821 8.145 - 8.204: 99.4488% ( 1) 00:14:48.821 8.204 - 8.262: 99.4546% ( 1) 00:14:48.821 8.262 - 8.320: 99.4662% ( 2) 00:14:48.821 8.320 - 8.378: 99.4952% ( 5) 00:14:48.821 8.436 - 8.495: 99.5010% ( 1) 00:14:48.821 8.553 - 8.611: 99.5068% ( 1) 00:14:48.821 8.669 - 8.727: 99.5126% ( 1) 00:14:48.821 8.727 - 8.785: 99.5184% ( 1) 00:14:48.821 8.785 - 8.844: 99.5242% ( 1) 00:14:48.821 8.844 - 8.902: 99.5300% ( 1) 00:14:48.821 8.902 - 8.960: 99.5358% ( 1) 
00:14:48.821 8.960 - 9.018: 99.5474% ( 2) 00:14:48.821 9.193 - 9.251: 99.5532% ( 1) 00:14:48.821 9.309 - 9.367: 99.5590% ( 1) 00:14:48.821 9.425 - 9.484: 99.5648% ( 1) 00:14:48.821 9.484 - 9.542: 99.5706% ( 1) 00:14:48.821 9.775 - 9.833: 99.5764% ( 1) 00:14:48.821 10.415 - 10.473: 99.5822% ( 1) 00:14:48.821 11.578 - 11.636: 99.5880% ( 1) 00:14:48.821 14.371 - 14.429: 99.5938% ( 1) 00:14:48.821 16.175 - 16.291: 99.5997% ( 1) 00:14:48.821 19.665 - 19.782: 99.6055% ( 1) 00:14:48.821 20.713 - 20.829: 99.6113% ( 1) 00:14:48.821 3991.738 - 4021.527: 100.0000% ( 67) 00:14:48.821 00:14:48.821 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:48.821 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:48.821 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:48.821 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:48.821 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:49.080 [ 00:14:49.080 { 00:14:49.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:49.080 "subtype": "Discovery", 00:14:49.080 "listen_addresses": [], 00:14:49.080 "allow_any_host": true, 00:14:49.080 "hosts": [] 00:14:49.080 }, 00:14:49.080 { 00:14:49.080 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:49.080 "subtype": "NVMe", 00:14:49.080 "listen_addresses": [ 00:14:49.080 { 00:14:49.080 "trtype": "VFIOUSER", 00:14:49.080 "adrfam": "IPv4", 00:14:49.080 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:49.080 "trsvcid": "0" 00:14:49.080 } 00:14:49.080 ], 00:14:49.080 "allow_any_host": true, 00:14:49.080 "hosts": [], 00:14:49.080 "serial_number": "SPDK1", 00:14:49.080 "model_number": "SPDK bdev Controller", 00:14:49.080 "max_namespaces": 32, 00:14:49.080 "min_cntlid": 1, 00:14:49.080 "max_cntlid": 65519, 00:14:49.080 "namespaces": [ 00:14:49.080 { 00:14:49.080 "nsid": 1, 00:14:49.080 "bdev_name": "Malloc1", 00:14:49.080 "name": "Malloc1", 00:14:49.080 "nguid": "4161269D55AC4350896EC8D831135E8E", 00:14:49.080 "uuid": "4161269d-55ac-4350-896e-c8d831135e8e" 00:14:49.080 }, 00:14:49.080 { 00:14:49.080 "nsid": 2, 00:14:49.080 "bdev_name": "Malloc3", 00:14:49.080 "name": "Malloc3", 00:14:49.080 "nguid": "5D92B0C48811441A805E51AC5DF2BC05", 00:14:49.080 "uuid": "5d92b0c4-8811-441a-805e-51ac5df2bc05" 00:14:49.080 } 00:14:49.080 ] 00:14:49.080 }, 00:14:49.080 { 00:14:49.080 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:49.080 "subtype": "NVMe", 00:14:49.080 "listen_addresses": [ 00:14:49.080 { 00:14:49.080 "trtype": "VFIOUSER", 00:14:49.080 "adrfam": "IPv4", 00:14:49.080 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:49.080 "trsvcid": "0" 00:14:49.080 } 00:14:49.080 ], 00:14:49.080 "allow_any_host": true, 00:14:49.080 "hosts": [], 00:14:49.080 "serial_number": "SPDK2", 00:14:49.080 "model_number": "SPDK bdev Controller", 00:14:49.080 "max_namespaces": 32, 00:14:49.080 "min_cntlid": 1, 00:14:49.080 "max_cntlid": 65519, 00:14:49.080 "namespaces": [ 00:14:49.080 { 00:14:49.080 "nsid": 1, 00:14:49.080 "bdev_name": "Malloc2", 00:14:49.080 "name": "Malloc2", 00:14:49.080 "nguid": "0E2B64C238BE46118B69B389098E0BB4", 00:14:49.080 "uuid": 
"0e2b64c2-38be-4611-8b69-b389098e0bb4" 00:14:49.080 } 00:14:49.080 ] 00:14:49.080 } 00:14:49.080 ] 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1197278 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:49.080 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:49.339 [2024-11-15 11:32:50.037260] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:49.339 Malloc4 00:14:49.339 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:49.598 [2024-11-15 11:32:50.405975] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.598 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:49.857 Asynchronous Event Request test 00:14:49.857 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:49.857 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:49.857 Registering asynchronous event callbacks... 00:14:49.857 Starting namespace attribute notice tests for all controllers... 00:14:49.857 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:49.857 aer_cb - Changed Namespace 00:14:49.857 Cleaning up... 
00:14:49.857 [ 00:14:49.857 { 00:14:49.857 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:49.857 "subtype": "Discovery", 00:14:49.857 "listen_addresses": [], 00:14:49.857 "allow_any_host": true, 00:14:49.857 "hosts": [] 00:14:49.857 }, 00:14:49.857 { 00:14:49.857 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:49.857 "subtype": "NVMe", 00:14:49.857 "listen_addresses": [ 00:14:49.857 { 00:14:49.857 "trtype": "VFIOUSER", 00:14:49.857 "adrfam": "IPv4", 00:14:49.857 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:49.857 "trsvcid": "0" 00:14:49.857 } 00:14:49.857 ], 00:14:49.857 "allow_any_host": true, 00:14:49.857 "hosts": [], 00:14:49.857 "serial_number": "SPDK1", 00:14:49.857 "model_number": "SPDK bdev Controller", 00:14:49.857 "max_namespaces": 32, 00:14:49.857 "min_cntlid": 1, 00:14:49.857 "max_cntlid": 65519, 00:14:49.857 "namespaces": [ 00:14:49.857 { 00:14:49.857 "nsid": 1, 00:14:49.857 "bdev_name": "Malloc1", 00:14:49.857 "name": "Malloc1", 00:14:49.857 "nguid": "4161269D55AC4350896EC8D831135E8E", 00:14:49.857 "uuid": "4161269d-55ac-4350-896e-c8d831135e8e" 00:14:49.857 }, 00:14:49.857 { 00:14:49.857 "nsid": 2, 00:14:49.857 "bdev_name": "Malloc3", 00:14:49.857 "name": "Malloc3", 00:14:49.857 "nguid": "5D92B0C48811441A805E51AC5DF2BC05", 00:14:49.857 "uuid": "5d92b0c4-8811-441a-805e-51ac5df2bc05" 00:14:49.857 } 00:14:49.857 ] 00:14:49.857 }, 00:14:49.857 { 00:14:49.857 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:49.857 "subtype": "NVMe", 00:14:49.857 "listen_addresses": [ 00:14:49.857 { 00:14:49.857 "trtype": "VFIOUSER", 00:14:49.857 "adrfam": "IPv4", 00:14:49.857 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:49.857 "trsvcid": "0" 00:14:49.857 } 00:14:49.857 ], 00:14:49.857 "allow_any_host": true, 00:14:49.857 "hosts": [], 00:14:49.857 "serial_number": "SPDK2", 00:14:49.857 "model_number": "SPDK bdev Controller", 00:14:49.857 "max_namespaces": 32, 00:14:49.857 "min_cntlid": 1, 00:14:49.857 "max_cntlid": 65519, 00:14:49.857 "namespaces": [ 00:14:49.857 { 00:14:49.857 "nsid": 1, 00:14:49.857 "bdev_name": "Malloc2", 00:14:49.857 "name": "Malloc2", 00:14:49.857 "nguid": "0E2B64C238BE46118B69B389098E0BB4", 00:14:49.857 "uuid": "0e2b64c2-38be-4611-8b69-b389098e0bb4" 00:14:49.857 }, 00:14:49.857 { 00:14:49.857 "nsid": 2, 00:14:49.857 "bdev_name": "Malloc4", 00:14:49.857 "name": "Malloc4", 00:14:49.857 "nguid": "91F770FCBBE74C1B82A93734E0E28F4A", 00:14:49.857 "uuid": "91f770fc-bbe7-4c1b-82a9-3734e0e28f4a" 00:14:49.857 } 00:14:49.857 ] 00:14:49.857 } 00:14:49.857 ] 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1197278 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1188599 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1188599 ']' 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1188599 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:49.857 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1188599 00:14:50.116 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.116 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.116 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1188599' 00:14:50.116 killing process with pid 1188599 00:14:50.116 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1188599 00:14:50.116 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1188599 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1197546 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1197546' 00:14:50.376 Process pid: 1197546 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1197546 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1197546 ']' 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.376 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:50.376 [2024-11-15 11:32:51.071213] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:50.376 [2024-11-15 11:32:51.072497] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:14:50.376 [2024-11-15 11:32:51.072546] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.376 [2024-11-15 11:32:51.166243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.376 [2024-11-15 11:32:51.216708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.376 [2024-11-15 11:32:51.216753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.376 [2024-11-15 11:32:51.216764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.376 [2024-11-15 11:32:51.216773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.376 [2024-11-15 11:32:51.216780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.376 [2024-11-15 11:32:51.218852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.376 [2024-11-15 11:32:51.218954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.376 [2024-11-15 11:32:51.219069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.376 [2024-11-15 11:32:51.219070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.635 [2024-11-15 11:32:51.293976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:50.635 [2024-11-15 11:32:51.294132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:50.635 [2024-11-15 11:32:51.294304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:50.635 [2024-11-15 11:32:51.294705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:50.635 [2024-11-15 11:32:51.294956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
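The target has now restarted with interrupt mode enabled, and setup_nvmf_vfio_user repeats one RPC sequence per vfio-user endpoint: create the VFIOUSER transport once (with the -M -I transport args passed to this run), then per device make a socket directory, back it with a 64 MB malloc bdev, and expose it through a subsystem, namespace, and VFIOUSER listener. A condensed sketch of that sequence for device 1, with names and paths taken from the commands that follow and rpc.py standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py:

  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same steps are then repeated with Malloc2, nqn.2019-07.io.spdk:cnode2, and /var/run/vfio-user/domain/vfio-user2/2 for the second device, as the trace below records.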
00:14:51.203 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:51.203 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:51.203 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:52.140 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:52.708 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:52.708 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:52.708 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.708 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:52.708 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:52.708 Malloc1 00:14:52.708 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:53.277 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:53.277 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:53.536 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.536 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:53.536 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:53.793 Malloc2 00:14:54.050 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:54.309 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:54.567 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1197546 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 1197546 ']' 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1197546 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1197546 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1197546' 00:14:54.826 killing process with pid 1197546 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1197546 00:14:54.826 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1197546 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:55.085 00:14:55.085 real 0m54.608s 00:14:55.085 user 3m29.333s 00:14:55.085 sys 0m3.671s 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.085 ************************************ 00:14:55.085 END TEST nvmf_vfio_user 00:14:55.085 ************************************ 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.085 ************************************ 00:14:55.085 START TEST nvmf_vfio_user_nvme_compliance 00:14:55.085 ************************************ 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:55.085 * Looking for test storage... 
00:14:55.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:55.085 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.346 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:55.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.347 --rc genhtml_branch_coverage=1 00:14:55.347 --rc genhtml_function_coverage=1 00:14:55.347 --rc genhtml_legend=1 00:14:55.347 --rc geninfo_all_blocks=1 00:14:55.347 --rc geninfo_unexecuted_blocks=1 00:14:55.347 00:14:55.347 ' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:55.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.347 --rc genhtml_branch_coverage=1 00:14:55.347 --rc genhtml_function_coverage=1 00:14:55.347 --rc genhtml_legend=1 00:14:55.347 --rc geninfo_all_blocks=1 00:14:55.347 --rc geninfo_unexecuted_blocks=1 00:14:55.347 00:14:55.347 ' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:55.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.347 --rc genhtml_branch_coverage=1 00:14:55.347 --rc genhtml_function_coverage=1 00:14:55.347 --rc genhtml_legend=1 00:14:55.347 --rc geninfo_all_blocks=1 00:14:55.347 --rc geninfo_unexecuted_blocks=1 00:14:55.347 00:14:55.347 ' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:55.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.347 --rc genhtml_branch_coverage=1 00:14:55.347 --rc genhtml_function_coverage=1 00:14:55.347 --rc genhtml_legend=1 00:14:55.347 --rc geninfo_all_blocks=1 00:14:55.347 --rc 
geninfo_unexecuted_blocks=1 00:14:55.347 00:14:55.347 ' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1198417 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1198417' 00:14:55.347 Process pid: 1198417 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1198417 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 1198417 ']' 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.347 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:55.348 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:55.348 [2024-11-15 11:32:56.056055] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:14:55.348 [2024-11-15 11:32:56.056114] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.348 [2024-11-15 11:32:56.153950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:55.607 [2024-11-15 11:32:56.203452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.607 [2024-11-15 11:32:56.203500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.607 [2024-11-15 11:32:56.203512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.607 [2024-11-15 11:32:56.203520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.607 [2024-11-15 11:32:56.203528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.607 [2024-11-15 11:32:56.205213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.607 [2024-11-15 11:32:56.205321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.607 [2024-11-15 11:32:56.205326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.607 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.607 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:55.607 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.544 malloc0 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:56.544 11:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.544 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:56.802 00:14:56.802 00:14:56.803 CUnit - A unit testing framework for C - Version 2.1-3 00:14:56.803 http://cunit.sourceforge.net/ 00:14:56.803 00:14:56.803 00:14:56.803 Suite: nvme_compliance 00:14:56.803 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 11:32:57.593951] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:56.803 [2024-11-15 11:32:57.595348] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:56.803 [2024-11-15 11:32:57.595363] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:56.803 [2024-11-15 11:32:57.595369] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:56.803 [2024-11-15 11:32:57.596977] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:56.803 passed 00:14:57.061 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 11:32:57.697641] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.061 [2024-11-15 11:32:57.700662] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.061 passed 00:14:57.061 Test: admin_identify_ns ...[2024-11-15 11:32:57.801873] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.062 [2024-11-15 11:32:57.862490] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:57.062 [2024-11-15 11:32:57.870475] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:57.062 [2024-11-15 11:32:57.891604] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:57.320 passed 00:14:57.320 Test: admin_get_features_mandatory_features ...[2024-11-15 11:32:57.988309] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.320 [2024-11-15 11:32:57.991339] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.320 passed 00:14:57.321 Test: admin_get_features_optional_features ...[2024-11-15 11:32:58.090990] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.321 [2024-11-15 11:32:58.094008] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.321 passed 00:14:57.579 Test: admin_set_features_number_of_queues ...[2024-11-15 11:32:58.193285] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.579 [2024-11-15 11:32:58.297575] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.579 passed 00:14:57.579 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 11:32:58.394231] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.579 [2024-11-15 11:32:58.397255] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.838 passed 00:14:57.838 Test: admin_get_log_page_with_lpo ...[2024-11-15 11:32:58.496112] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.838 [2024-11-15 11:32:58.563477] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:57.838 [2024-11-15 11:32:58.576545] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.838 passed 00:14:57.838 Test: fabric_property_get ...[2024-11-15 11:32:58.673228] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.838 [2024-11-15 11:32:58.674545] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:57.838 [2024-11-15 11:32:58.676250] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.096 passed 00:14:58.096 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 11:32:58.774894] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.096 [2024-11-15 11:32:58.776182] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:58.096 [2024-11-15 11:32:58.777917] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.096 passed 00:14:58.096 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 11:32:58.875737] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.355 [2024-11-15 11:32:58.959478] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.355 [2024-11-15 11:32:58.975480] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.355 [2024-11-15 11:32:58.980566] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.355 passed 00:14:58.355 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 11:32:59.079544] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.355 [2024-11-15 11:32:59.080833] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:58.355 [2024-11-15 11:32:59.082563] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.355 passed 00:14:58.355 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 11:32:59.181445] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.614 [2024-11-15 11:32:59.259480] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:58.614 [2024-11-15 11:32:59.283483] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:58.614 [2024-11-15 11:32:59.288581] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.614 passed 00:14:58.614 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 11:32:59.385297] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.614 [2024-11-15 11:32:59.386605] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:58.614 [2024-11-15 11:32:59.386630] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:58.614 [2024-11-15 11:32:59.388327] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.614 passed 00:14:58.873 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 11:32:59.488283] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.873 [2024-11-15 11:32:59.579474] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:58.873 [2024-11-15 11:32:59.587471] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:58.873 [2024-11-15 11:32:59.595478] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:58.873 [2024-11-15 11:32:59.603468] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:58.873 [2024-11-15 11:32:59.632581] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.873 passed 00:14:59.132 Test: admin_create_io_sq_verify_pc ...[2024-11-15 11:32:59.729247] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.132 [2024-11-15 11:32:59.744476] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:59.132 [2024-11-15 11:32:59.762423] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.132 passed 00:14:59.132 Test: admin_create_io_qp_max_qps ...[2024-11-15 11:32:59.862101] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.511 [2024-11-15 11:33:00.937474] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:00.511 [2024-11-15 11:33:01.322471] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.770 passed 00:15:00.770 Test: admin_create_io_sq_shared_cq ...[2024-11-15 11:33:01.423328] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.770 [2024-11-15 11:33:01.558476] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:00.770 [2024-11-15 11:33:01.595542] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.029 passed 00:15:01.029 00:15:01.029 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.029 suites 1 1 n/a 0 0 00:15:01.029 tests 18 18 18 0 0 00:15:01.029 asserts 
360 360 360 0 n/a 00:15:01.029 00:15:01.029 Elapsed time = 1.684 seconds 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1198417 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 1198417 ']' 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 1198417 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1198417 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1198417' 00:15:01.029 killing process with pid 1198417 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 1198417 00:15:01.029 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 1198417 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:01.288 00:15:01.288 real 0m6.104s 00:15:01.288 user 0m17.143s 00:15:01.288 sys 0m0.546s 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.288 ************************************ 00:15:01.288 END TEST nvmf_vfio_user_nvme_compliance 00:15:01.288 ************************************ 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.288 ************************************ 00:15:01.288 START TEST nvmf_vfio_user_fuzz 00:15:01.288 ************************************ 00:15:01.288 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.288 * Looking for test storage... 
00:15:01.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.288 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:01.288 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:01.288 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.550 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.551 --rc genhtml_branch_coverage=1 00:15:01.551 --rc genhtml_function_coverage=1 00:15:01.551 --rc genhtml_legend=1 00:15:01.551 --rc geninfo_all_blocks=1 00:15:01.551 --rc geninfo_unexecuted_blocks=1 00:15:01.551 00:15:01.551 ' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.551 --rc genhtml_branch_coverage=1 00:15:01.551 --rc genhtml_function_coverage=1 00:15:01.551 --rc genhtml_legend=1 00:15:01.551 --rc geninfo_all_blocks=1 00:15:01.551 --rc geninfo_unexecuted_blocks=1 00:15:01.551 00:15:01.551 ' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.551 --rc genhtml_branch_coverage=1 00:15:01.551 --rc genhtml_function_coverage=1 00:15:01.551 --rc genhtml_legend=1 00:15:01.551 --rc geninfo_all_blocks=1 00:15:01.551 --rc geninfo_unexecuted_blocks=1 00:15:01.551 00:15:01.551 ' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:01.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.551 --rc genhtml_branch_coverage=1 00:15:01.551 --rc genhtml_function_coverage=1 00:15:01.551 --rc genhtml_legend=1 00:15:01.551 --rc geninfo_all_blocks=1 00:15:01.551 --rc geninfo_unexecuted_blocks=1 00:15:01.551 00:15:01.551 ' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:01.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1199846 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1199846' 00:15:01.551 Process pid: 1199846 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1199846 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 1199846 ']' 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:01.551 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
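
The lines above show the standard SPDK autotest launch pattern for the fuzz target: build/bin/nvmf_tgt is started in the background, its pid is captured, a killprocess trap is installed, and waitforlisten blocks until the app answers on the /var/tmp/spdk.sock RPC socket. A minimal sketch of that pattern follows; the helper names killprocess/waitforlisten come from autotest_common.sh, the flag values are copied from the log, and the flag descriptions in the comments are assumptions beyond what the log itself states:

    # start the NVMe-oF target; -m 0x1 = core mask, -i 0 / -e 0xFFFF are the
    # shm id and trace mask the harness always passes (values copied from the log)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # make sure the target is killed if the test is interrupted
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # block until the target is listening on its RPC socket (/var/tmp/spdk.sock)
    waitforlisten $nvmfpid
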
00:15:01.552 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:01.552 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:01.811 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:01.811 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:01.811 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 malloc0 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
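
Everything the fuzzer will talk to is assembled through RPCs in the lines above: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0, its namespace, and a vfio-user listener rooted at /var/run/vfio-user. The rpc_cmd helper is essentially a wrapper around scripts/rpc.py, so the same sequence issued by hand would look roughly like this (method names and arguments exactly as they appear in the log):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0                          # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # -a: allow any host, -s: serial
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting transport ID string, trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user, is what nvme_fuzz is pointed at via -F in the next step.
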
00:15:02.748 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:34.846 Fuzzing completed. Shutting down the fuzz application 00:15:34.846 00:15:34.846 Dumping successful admin opcodes: 00:15:34.846 8, 9, 10, 24, 00:15:34.846 Dumping successful io opcodes: 00:15:34.846 0, 00:15:34.846 NS: 0x20000081ef00 I/O qp, Total commands completed: 829879, total successful commands: 3215, random_seed: 3858389504 00:15:34.846 NS: 0x20000081ef00 admin qp, Total commands completed: 103893, total successful commands: 858, random_seed: 703461568 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1199846 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 1199846 ']' 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 1199846 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1199846 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1199846' 00:15:34.846 killing process with pid 1199846 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 1199846 00:15:34.846 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 1199846 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:34.846 00:15:34.846 real 0m33.206s 00:15:34.846 user 0m35.881s 00:15:34.846 sys 0m28.132s 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.846 
************************************ 00:15:34.846 END TEST nvmf_vfio_user_fuzz 00:15:34.846 ************************************ 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.846 ************************************ 00:15:34.846 START TEST nvmf_auth_target 00:15:34.846 ************************************ 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:34.846 * Looking for test storage... 00:15:34.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:34.846 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.847 --rc genhtml_branch_coverage=1 00:15:34.847 --rc genhtml_function_coverage=1 00:15:34.847 --rc genhtml_legend=1 00:15:34.847 --rc geninfo_all_blocks=1 00:15:34.847 --rc geninfo_unexecuted_blocks=1 00:15:34.847 00:15:34.847 ' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.847 --rc genhtml_branch_coverage=1 00:15:34.847 --rc genhtml_function_coverage=1 00:15:34.847 --rc genhtml_legend=1 00:15:34.847 --rc geninfo_all_blocks=1 00:15:34.847 --rc geninfo_unexecuted_blocks=1 00:15:34.847 00:15:34.847 ' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.847 --rc genhtml_branch_coverage=1 00:15:34.847 --rc genhtml_function_coverage=1 00:15:34.847 --rc genhtml_legend=1 00:15:34.847 --rc geninfo_all_blocks=1 00:15:34.847 --rc geninfo_unexecuted_blocks=1 00:15:34.847 00:15:34.847 ' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.847 --rc genhtml_branch_coverage=1 00:15:34.847 --rc genhtml_function_coverage=1 00:15:34.847 --rc genhtml_legend=1 00:15:34.847 --rc geninfo_all_blocks=1 00:15:34.847 --rc geninfo_unexecuted_blocks=1 00:15:34.847 00:15:34.847 ' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.847 11:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:34.847 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:40.124 
11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.124 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:40.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.125 11:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:40.125 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:40.125 Found net devices under 0000:af:00.0: cvl_0_0 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:40.125 Found net devices under 0000:af:00.1: cvl_0_1 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:40.125 11:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:40.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:15:40.125 00:15:40.125 --- 10.0.0.2 ping statistics --- 00:15:40.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.125 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:15:40.125 00:15:40.125 --- 10.0.0.1 ping statistics --- 00:15:40.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.125 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1209204 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1209204 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1209204 ']' 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.125 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:40.126 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.126 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.126 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1209225 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:40.694 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4196781cbaa1dd9fe545f006fd30ac61fed280e2b458b143 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.J8a 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4196781cbaa1dd9fe545f006fd30ac61fed280e2b458b143 0 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4196781cbaa1dd9fe545f006fd30ac61fed280e2b458b143 0 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=4196781cbaa1dd9fe545f006fd30ac61fed280e2b458b143 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.J8a 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.J8a 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.J8a 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2bf62d239314d29a57b73970089e31cf87c2e74033bf34e9e79ecdfbfb7e93bb 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.50R 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2bf62d239314d29a57b73970089e31cf87c2e74033bf34e9e79ecdfbfb7e93bb 3 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2bf62d239314d29a57b73970089e31cf87c2e74033bf34e9e79ecdfbfb7e93bb 3 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2bf62d239314d29a57b73970089e31cf87c2e74033bf34e9e79ecdfbfb7e93bb 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.50R 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.50R 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.50R 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=70c593e9b6abca13a9dede56373f234c 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OdX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 70c593e9b6abca13a9dede56373f234c 1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 70c593e9b6abca13a9dede56373f234c 1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=70c593e9b6abca13a9dede56373f234c 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OdX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OdX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.OdX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3c08a00a29efc49220fd9fbd06747a04f6e54db138dd7858 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pcg 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3c08a00a29efc49220fd9fbd06747a04f6e54db138dd7858 2 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 3c08a00a29efc49220fd9fbd06747a04f6e54db138dd7858 2 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3c08a00a29efc49220fd9fbd06747a04f6e54db138dd7858 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pcg 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pcg 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.pcg 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.695 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b7850208f5873acbd515de20f3588e2c05f9206040fb3468 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uiV 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b7850208f5873acbd515de20f3588e2c05f9206040fb3468 2 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b7850208f5873acbd515de20f3588e2c05f9206040fb3468 2 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b7850208f5873acbd515de20f3588e2c05f9206040fb3468 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uiV 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uiV 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.uiV 
00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2ed817c899b79d91bbd798922adcc6c7 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MMZ 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2ed817c899b79d91bbd798922adcc6c7 1 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2ed817c899b79d91bbd798922adcc6c7 1 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2ed817c899b79d91bbd798922adcc6c7 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MMZ 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MMZ 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.MMZ 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=86ea3220ef428a0d62e5165b2a7be421ad5d2e81df76aaba4978390e948dd98e 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qnL 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 86ea3220ef428a0d62e5165b2a7be421ad5d2e81df76aaba4978390e948dd98e 3 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 86ea3220ef428a0d62e5165b2a7be421ad5d2e81df76aaba4978390e948dd98e 3 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=86ea3220ef428a0d62e5165b2a7be421ad5d2e81df76aaba4978390e948dd98e 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qnL 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qnL 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.qnL 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1209204 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1209204 ']' 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.955 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1209225 /var/tmp/host.sock 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1209225 ']' 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:15:41.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:41.215 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J8a 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.473 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.J8a 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.J8a 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.50R ]] 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.50R 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.50R 00:15:41.733 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.50R 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OdX 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.993 11:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OdX 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OdX 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.pcg ]] 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pcg 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pcg 00:15:41.993 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pcg 00:15:42.311 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:42.311 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uiV 00:15:42.311 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.311 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uiV 00:15:42.311 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uiV 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.MMZ ]] 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MMZ 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MMZ 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MMZ 00:15:42.571 11:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qnL 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qnL 00:15:42.571 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qnL 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.831 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.090 00:15:43.090 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.090 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.090 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.350 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.350 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.350 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.350 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.609 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.609 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.609 { 00:15:43.609 "cntlid": 1, 00:15:43.609 "qid": 0, 00:15:43.609 "state": "enabled", 00:15:43.609 "thread": "nvmf_tgt_poll_group_000", 00:15:43.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:43.609 "listen_address": { 00:15:43.610 "trtype": "TCP", 00:15:43.610 "adrfam": "IPv4", 00:15:43.610 "traddr": "10.0.0.2", 00:15:43.610 "trsvcid": "4420" 00:15:43.610 }, 00:15:43.610 "peer_address": { 00:15:43.610 "trtype": "TCP", 00:15:43.610 "adrfam": "IPv4", 00:15:43.610 "traddr": "10.0.0.1", 00:15:43.610 "trsvcid": "60378" 00:15:43.610 }, 00:15:43.610 "auth": { 00:15:43.610 "state": "completed", 00:15:43.610 "digest": "sha256", 00:15:43.610 "dhgroup": "null" 00:15:43.610 } 00:15:43.610 } 00:15:43.610 ]' 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.610 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:15:43.868 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:15:43.868 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.803 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.372 00:15:45.372 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.372 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.372 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.372 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.372 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.372 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.372 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.372 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.372 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.372 { 00:15:45.372 "cntlid": 3, 00:15:45.372 "qid": 0, 00:15:45.373 "state": "enabled", 00:15:45.373 "thread": "nvmf_tgt_poll_group_000", 00:15:45.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:45.373 "listen_address": { 00:15:45.373 "trtype": "TCP", 00:15:45.373 "adrfam": "IPv4", 00:15:45.373 "traddr": "10.0.0.2", 00:15:45.373 "trsvcid": "4420" 00:15:45.373 }, 00:15:45.373 "peer_address": { 00:15:45.373 "trtype": "TCP", 00:15:45.373 "adrfam": "IPv4", 00:15:45.373 "traddr": "10.0.0.1", 00:15:45.373 "trsvcid": "60408" 00:15:45.373 }, 00:15:45.373 "auth": { 00:15:45.373 "state": "completed", 00:15:45.373 "digest": "sha256", 00:15:45.373 "dhgroup": "null" 00:15:45.373 } 00:15:45.373 } 00:15:45.373 ]' 00:15:45.373 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.373 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.373 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.373 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:45.373 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.633 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.633 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:45.633 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.633 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:15:45.633 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.572 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.573 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.573 11:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.573 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.573 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.573 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.832 00:15:46.832 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.832 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.832 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.091 { 00:15:47.091 "cntlid": 5, 00:15:47.091 "qid": 0, 00:15:47.091 "state": "enabled", 00:15:47.091 "thread": "nvmf_tgt_poll_group_000", 00:15:47.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:47.091 "listen_address": { 00:15:47.091 "trtype": "TCP", 00:15:47.091 "adrfam": "IPv4", 00:15:47.091 "traddr": "10.0.0.2", 00:15:47.091 "trsvcid": "4420" 00:15:47.091 }, 00:15:47.091 "peer_address": { 00:15:47.091 "trtype": "TCP", 00:15:47.091 "adrfam": "IPv4", 00:15:47.091 "traddr": "10.0.0.1", 00:15:47.091 "trsvcid": "60444" 00:15:47.091 }, 00:15:47.091 "auth": { 00:15:47.091 "state": "completed", 00:15:47.091 "digest": "sha256", 00:15:47.091 "dhgroup": "null" 00:15:47.091 } 00:15:47.091 } 00:15:47.091 ]' 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.091 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.350 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.350 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.350 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.609 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:15:47.610 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:15:48.178 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.178 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:48.178 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.178 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.178 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.178 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.178 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:48.178 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:15:48.438 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.438 
11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.698 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.698 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.698 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.698 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.957 00:15:48.957 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.957 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.957 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.216 { 00:15:49.216 "cntlid": 7, 00:15:49.216 "qid": 0, 00:15:49.216 "state": "enabled", 00:15:49.216 "thread": "nvmf_tgt_poll_group_000", 00:15:49.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:49.216 "listen_address": { 00:15:49.216 "trtype": "TCP", 00:15:49.216 "adrfam": "IPv4", 00:15:49.216 "traddr": "10.0.0.2", 00:15:49.216 "trsvcid": "4420" 00:15:49.216 }, 00:15:49.216 "peer_address": { 00:15:49.216 "trtype": "TCP", 00:15:49.216 "adrfam": "IPv4", 00:15:49.216 "traddr": "10.0.0.1", 00:15:49.216 "trsvcid": "60478" 00:15:49.216 }, 00:15:49.216 "auth": { 00:15:49.216 "state": "completed", 00:15:49.216 "digest": "sha256", 00:15:49.216 "dhgroup": "null" 00:15:49.216 } 00:15:49.216 } 00:15:49.216 ]' 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.216 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.216 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.216 11:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.216 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.216 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.476 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:15:49.476 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.412 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.671 11:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.671 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.930 00:15:50.930 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.930 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.930 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.188 { 00:15:51.188 "cntlid": 9, 00:15:51.188 "qid": 0, 00:15:51.188 "state": "enabled", 00:15:51.188 "thread": "nvmf_tgt_poll_group_000", 00:15:51.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:51.188 "listen_address": { 00:15:51.188 "trtype": "TCP", 00:15:51.188 "adrfam": "IPv4", 00:15:51.188 "traddr": "10.0.0.2", 00:15:51.188 "trsvcid": "4420" 00:15:51.188 }, 00:15:51.188 "peer_address": { 00:15:51.188 "trtype": "TCP", 00:15:51.188 "adrfam": "IPv4", 00:15:51.188 "traddr": "10.0.0.1", 00:15:51.188 "trsvcid": "60522" 00:15:51.188 }, 00:15:51.188 "auth": { 00:15:51.188 "state": "completed", 00:15:51.188 "digest": "sha256", 00:15:51.188 "dhgroup": "ffdhe2048" 00:15:51.188 } 00:15:51.188 } 00:15:51.188 ]' 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.188 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.447 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:15:51.447 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.014 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.582 
11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.582 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.841 00:15:52.841 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.841 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.841 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.100 { 00:15:53.100 "cntlid": 11, 00:15:53.100 "qid": 0, 00:15:53.100 "state": "enabled", 00:15:53.100 "thread": "nvmf_tgt_poll_group_000", 00:15:53.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:53.100 "listen_address": { 00:15:53.100 "trtype": "TCP", 00:15:53.100 "adrfam": "IPv4", 00:15:53.100 "traddr": "10.0.0.2", 00:15:53.100 "trsvcid": "4420" 00:15:53.100 }, 00:15:53.100 "peer_address": { 00:15:53.100 "trtype": "TCP", 00:15:53.100 "adrfam": "IPv4", 00:15:53.100 "traddr": "10.0.0.1", 00:15:53.100 "trsvcid": "54118" 00:15:53.100 }, 00:15:53.100 "auth": { 00:15:53.100 "state": "completed", 00:15:53.100 "digest": "sha256", 00:15:53.100 "dhgroup": "ffdhe2048" 00:15:53.100 } 00:15:53.100 } 00:15:53.100 ]' 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.100 11:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.100 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.360 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:15:53.360 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:54.297 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.556 11:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.556 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.815 00:15:54.815 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.815 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.815 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.074 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.074 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.074 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.074 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.333 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.333 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.333 { 00:15:55.333 "cntlid": 13, 00:15:55.333 "qid": 0, 00:15:55.333 "state": "enabled", 00:15:55.333 "thread": "nvmf_tgt_poll_group_000", 00:15:55.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:55.333 "listen_address": { 00:15:55.333 "trtype": "TCP", 00:15:55.333 "adrfam": "IPv4", 00:15:55.333 "traddr": "10.0.0.2", 00:15:55.333 "trsvcid": "4420" 00:15:55.333 }, 00:15:55.333 "peer_address": { 00:15:55.333 "trtype": "TCP", 00:15:55.333 "adrfam": "IPv4", 00:15:55.333 "traddr": "10.0.0.1", 00:15:55.333 "trsvcid": "54144" 00:15:55.333 }, 00:15:55.333 "auth": { 00:15:55.333 "state": "completed", 00:15:55.333 "digest": 
"sha256", 00:15:55.333 "dhgroup": "ffdhe2048" 00:15:55.333 } 00:15:55.333 } 00:15:55.333 ]' 00:15:55.333 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.333 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.333 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.333 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.333 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.333 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.333 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.333 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.591 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:15:55.591 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.526 11:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.526 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.095 00:15:57.095 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.095 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.095 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.353 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.353 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.353 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.353 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.353 { 00:15:57.353 "cntlid": 15, 00:15:57.353 "qid": 0, 00:15:57.353 "state": "enabled", 00:15:57.353 "thread": "nvmf_tgt_poll_group_000", 00:15:57.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:57.353 "listen_address": { 00:15:57.353 "trtype": "TCP", 00:15:57.353 "adrfam": "IPv4", 00:15:57.353 "traddr": "10.0.0.2", 00:15:57.353 "trsvcid": "4420" 00:15:57.353 }, 00:15:57.353 "peer_address": { 00:15:57.353 "trtype": "TCP", 00:15:57.353 "adrfam": "IPv4", 00:15:57.353 "traddr": "10.0.0.1", 00:15:57.353 
"trsvcid": "54178" 00:15:57.353 }, 00:15:57.353 "auth": { 00:15:57.353 "state": "completed", 00:15:57.353 "digest": "sha256", 00:15:57.353 "dhgroup": "ffdhe2048" 00:15:57.353 } 00:15:57.353 } 00:15:57.353 ]' 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.353 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.612 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:15:57.612 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.549 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:58.808 11:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.808 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.067 00:15:59.067 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.067 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.067 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.326 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.326 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.326 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.327 { 00:15:59.327 "cntlid": 17, 00:15:59.327 "qid": 0, 00:15:59.327 "state": "enabled", 00:15:59.327 "thread": "nvmf_tgt_poll_group_000", 00:15:59.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:15:59.327 "listen_address": { 00:15:59.327 "trtype": "TCP", 00:15:59.327 "adrfam": "IPv4", 
00:15:59.327 "traddr": "10.0.0.2", 00:15:59.327 "trsvcid": "4420" 00:15:59.327 }, 00:15:59.327 "peer_address": { 00:15:59.327 "trtype": "TCP", 00:15:59.327 "adrfam": "IPv4", 00:15:59.327 "traddr": "10.0.0.1", 00:15:59.327 "trsvcid": "54204" 00:15:59.327 }, 00:15:59.327 "auth": { 00:15:59.327 "state": "completed", 00:15:59.327 "digest": "sha256", 00:15:59.327 "dhgroup": "ffdhe3072" 00:15:59.327 } 00:15:59.327 } 00:15:59.327 ]' 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.327 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.585 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.585 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.585 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.846 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:15:59.846 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.783 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.351 00:16:01.351 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.351 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.351 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.609 { 
00:16:01.609 "cntlid": 19, 00:16:01.609 "qid": 0, 00:16:01.609 "state": "enabled", 00:16:01.609 "thread": "nvmf_tgt_poll_group_000", 00:16:01.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:01.609 "listen_address": { 00:16:01.609 "trtype": "TCP", 00:16:01.609 "adrfam": "IPv4", 00:16:01.609 "traddr": "10.0.0.2", 00:16:01.609 "trsvcid": "4420" 00:16:01.609 }, 00:16:01.609 "peer_address": { 00:16:01.609 "trtype": "TCP", 00:16:01.609 "adrfam": "IPv4", 00:16:01.609 "traddr": "10.0.0.1", 00:16:01.609 "trsvcid": "54224" 00:16:01.609 }, 00:16:01.609 "auth": { 00:16:01.609 "state": "completed", 00:16:01.609 "digest": "sha256", 00:16:01.609 "dhgroup": "ffdhe3072" 00:16:01.609 } 00:16:01.609 } 00:16:01.609 ]' 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.609 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.868 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:01.868 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.803 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.062 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.320 00:16:03.320 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.320 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.320 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.577 11:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.577 { 00:16:03.577 "cntlid": 21, 00:16:03.577 "qid": 0, 00:16:03.577 "state": "enabled", 00:16:03.577 "thread": "nvmf_tgt_poll_group_000", 00:16:03.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:03.577 "listen_address": { 00:16:03.577 "trtype": "TCP", 00:16:03.577 "adrfam": "IPv4", 00:16:03.577 "traddr": "10.0.0.2", 00:16:03.577 "trsvcid": "4420" 00:16:03.577 }, 00:16:03.577 "peer_address": { 00:16:03.577 "trtype": "TCP", 00:16:03.577 "adrfam": "IPv4", 00:16:03.577 "traddr": "10.0.0.1", 00:16:03.577 "trsvcid": "43784" 00:16:03.577 }, 00:16:03.577 "auth": { 00:16:03.577 "state": "completed", 00:16:03.577 "digest": "sha256", 00:16:03.577 "dhgroup": "ffdhe3072" 00:16:03.577 } 00:16:03.577 } 00:16:03.577 ]' 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.577 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.578 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.578 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.578 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.578 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.578 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.835 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:03.835 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.773 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.031 00:16:05.031 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.031 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.031 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.289 11:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.289 { 00:16:05.289 "cntlid": 23, 00:16:05.289 "qid": 0, 00:16:05.289 "state": "enabled", 00:16:05.289 "thread": "nvmf_tgt_poll_group_000", 00:16:05.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:05.289 "listen_address": { 00:16:05.289 "trtype": "TCP", 00:16:05.289 "adrfam": "IPv4", 00:16:05.289 "traddr": "10.0.0.2", 00:16:05.289 "trsvcid": "4420" 00:16:05.289 }, 00:16:05.289 "peer_address": { 00:16:05.289 "trtype": "TCP", 00:16:05.289 "adrfam": "IPv4", 00:16:05.289 "traddr": "10.0.0.1", 00:16:05.289 "trsvcid": "43812" 00:16:05.289 }, 00:16:05.289 "auth": { 00:16:05.289 "state": "completed", 00:16:05.289 "digest": "sha256", 00:16:05.289 "dhgroup": "ffdhe3072" 00:16:05.289 } 00:16:05.289 } 00:16:05.289 ]' 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.289 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.547 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.547 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.547 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.804 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:05.804 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.369 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.628 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.886 00:16:06.886 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.886 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.886 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.143 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.143 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.143 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.143 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.401 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.401 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.401 { 00:16:07.401 "cntlid": 25, 00:16:07.401 "qid": 0, 00:16:07.401 "state": "enabled", 00:16:07.401 "thread": "nvmf_tgt_poll_group_000", 00:16:07.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:07.401 "listen_address": { 00:16:07.401 "trtype": "TCP", 00:16:07.401 "adrfam": "IPv4", 00:16:07.401 "traddr": "10.0.0.2", 00:16:07.401 "trsvcid": "4420" 00:16:07.401 }, 00:16:07.401 "peer_address": { 00:16:07.401 "trtype": "TCP", 00:16:07.401 "adrfam": "IPv4", 00:16:07.401 "traddr": "10.0.0.1", 00:16:07.401 "trsvcid": "43820" 00:16:07.401 }, 00:16:07.401 "auth": { 00:16:07.401 "state": "completed", 00:16:07.401 "digest": "sha256", 00:16:07.401 "dhgroup": "ffdhe4096" 00:16:07.401 } 00:16:07.401 } 00:16:07.401 ]' 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.401 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.660 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:07.660 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.592 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.850 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.108 00:16:09.108 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.108 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.108 11:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.366 { 00:16:09.366 "cntlid": 27, 00:16:09.366 "qid": 0, 00:16:09.366 "state": "enabled", 00:16:09.366 "thread": "nvmf_tgt_poll_group_000", 00:16:09.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:09.366 "listen_address": { 00:16:09.366 "trtype": "TCP", 00:16:09.366 "adrfam": "IPv4", 00:16:09.366 "traddr": "10.0.0.2", 00:16:09.366 "trsvcid": "4420" 00:16:09.366 }, 00:16:09.366 "peer_address": { 00:16:09.366 "trtype": "TCP", 00:16:09.366 "adrfam": "IPv4", 00:16:09.366 "traddr": "10.0.0.1", 00:16:09.366 "trsvcid": "43850" 00:16:09.366 }, 00:16:09.366 "auth": { 00:16:09.366 "state": "completed", 00:16:09.366 "digest": "sha256", 00:16:09.366 "dhgroup": "ffdhe4096" 00:16:09.366 } 00:16:09.366 } 00:16:09.366 ]' 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.366 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.942 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:09.942 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.519 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.519 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.789 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:10.789 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.789 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.789 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.789 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.790 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.055 00:16:11.055 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.055 11:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.055 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.313 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.313 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.313 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.313 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.313 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.313 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.313 { 00:16:11.313 "cntlid": 29, 00:16:11.313 "qid": 0, 00:16:11.313 "state": "enabled", 00:16:11.313 "thread": "nvmf_tgt_poll_group_000", 00:16:11.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:11.313 "listen_address": { 00:16:11.313 "trtype": "TCP", 00:16:11.313 "adrfam": "IPv4", 00:16:11.313 "traddr": "10.0.0.2", 00:16:11.313 "trsvcid": "4420" 00:16:11.313 }, 00:16:11.313 "peer_address": { 00:16:11.313 "trtype": "TCP", 00:16:11.313 "adrfam": "IPv4", 00:16:11.313 "traddr": "10.0.0.1", 00:16:11.313 "trsvcid": "43894" 00:16:11.313 }, 00:16:11.313 "auth": { 00:16:11.313 "state": "completed", 00:16:11.313 "digest": "sha256", 00:16:11.313 "dhgroup": "ffdhe4096" 00:16:11.313 } 00:16:11.313 } 00:16:11.313 ]' 00:16:11.314 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.314 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.314 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.572 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.572 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.572 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.572 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.572 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.831 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:11.831 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret 
DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:12.398 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.398 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:12.398 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.398 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.726 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.985 00:16:12.985 11:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.985 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.985 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.244 { 00:16:13.244 "cntlid": 31, 00:16:13.244 "qid": 0, 00:16:13.244 "state": "enabled", 00:16:13.244 "thread": "nvmf_tgt_poll_group_000", 00:16:13.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:13.244 "listen_address": { 00:16:13.244 "trtype": "TCP", 00:16:13.244 "adrfam": "IPv4", 00:16:13.244 "traddr": "10.0.0.2", 00:16:13.244 "trsvcid": "4420" 00:16:13.244 }, 00:16:13.244 "peer_address": { 00:16:13.244 "trtype": "TCP", 00:16:13.244 "adrfam": "IPv4", 00:16:13.244 "traddr": "10.0.0.1", 00:16:13.244 "trsvcid": "51748" 00:16:13.244 }, 00:16:13.244 "auth": { 00:16:13.244 "state": "completed", 00:16:13.244 "digest": "sha256", 00:16:13.244 "dhgroup": "ffdhe4096" 00:16:13.244 } 00:16:13.244 } 00:16:13.244 ]' 00:16:13.244 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.244 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.244 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.244 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.244 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.503 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.503 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.503 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.504 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:13.504 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.439 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.698 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.957 00:16:14.957 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.957 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.957 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.216 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.216 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.216 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.216 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.475 { 00:16:15.475 "cntlid": 33, 00:16:15.475 "qid": 0, 00:16:15.475 "state": "enabled", 00:16:15.475 "thread": "nvmf_tgt_poll_group_000", 00:16:15.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:15.475 "listen_address": { 00:16:15.475 "trtype": "TCP", 00:16:15.475 "adrfam": "IPv4", 00:16:15.475 "traddr": "10.0.0.2", 00:16:15.475 "trsvcid": "4420" 00:16:15.475 }, 00:16:15.475 "peer_address": { 00:16:15.475 "trtype": "TCP", 00:16:15.475 "adrfam": "IPv4", 00:16:15.475 "traddr": "10.0.0.1", 00:16:15.475 "trsvcid": "51774" 00:16:15.475 }, 00:16:15.475 "auth": { 00:16:15.475 "state": "completed", 00:16:15.475 "digest": "sha256", 00:16:15.475 "dhgroup": "ffdhe6144" 00:16:15.475 } 00:16:15.475 } 00:16:15.475 ]' 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.475 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.735 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret 
DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:15.735 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:16.672 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.003 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.004 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.301 00:16:17.301 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.301 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.301 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.595 { 00:16:17.595 "cntlid": 35, 00:16:17.595 "qid": 0, 00:16:17.595 "state": "enabled", 00:16:17.595 "thread": "nvmf_tgt_poll_group_000", 00:16:17.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:17.595 "listen_address": { 00:16:17.595 "trtype": "TCP", 00:16:17.595 "adrfam": "IPv4", 00:16:17.595 "traddr": "10.0.0.2", 00:16:17.595 "trsvcid": "4420" 00:16:17.595 }, 00:16:17.595 "peer_address": { 00:16:17.595 "trtype": "TCP", 00:16:17.595 "adrfam": "IPv4", 00:16:17.595 "traddr": "10.0.0.1", 00:16:17.595 "trsvcid": "51784" 00:16:17.595 }, 00:16:17.595 "auth": { 00:16:17.595 "state": "completed", 00:16:17.595 "digest": "sha256", 00:16:17.595 "dhgroup": "ffdhe6144" 00:16:17.595 } 00:16:17.595 } 00:16:17.595 ]' 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.595 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.914 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:17.914 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.852 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.113 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.682 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.682 { 00:16:19.682 "cntlid": 37, 00:16:19.682 "qid": 0, 00:16:19.682 "state": "enabled", 00:16:19.682 "thread": "nvmf_tgt_poll_group_000", 00:16:19.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:19.682 "listen_address": { 00:16:19.682 "trtype": "TCP", 00:16:19.682 "adrfam": "IPv4", 00:16:19.682 "traddr": "10.0.0.2", 00:16:19.682 "trsvcid": "4420" 00:16:19.682 }, 00:16:19.682 "peer_address": { 00:16:19.682 "trtype": "TCP", 00:16:19.682 "adrfam": "IPv4", 00:16:19.682 "traddr": "10.0.0.1", 00:16:19.682 "trsvcid": "51818" 00:16:19.682 }, 00:16:19.682 "auth": { 00:16:19.682 "state": "completed", 00:16:19.682 "digest": "sha256", 00:16:19.682 "dhgroup": "ffdhe6144" 00:16:19.682 } 00:16:19.682 } 00:16:19.682 ]' 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.682 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.941 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.941 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.941 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.941 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:19.941 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.199 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:20.199 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.136 11:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.136 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.703 00:16:21.703 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.703 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.703 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.962 { 00:16:21.962 "cntlid": 39, 00:16:21.962 "qid": 0, 00:16:21.962 "state": "enabled", 00:16:21.962 "thread": "nvmf_tgt_poll_group_000", 00:16:21.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:21.962 "listen_address": { 00:16:21.962 "trtype": "TCP", 00:16:21.962 "adrfam": "IPv4", 00:16:21.962 "traddr": "10.0.0.2", 00:16:21.962 "trsvcid": "4420" 00:16:21.962 }, 00:16:21.962 "peer_address": { 00:16:21.962 "trtype": "TCP", 00:16:21.962 "adrfam": "IPv4", 00:16:21.962 "traddr": "10.0.0.1", 00:16:21.962 "trsvcid": "51854" 00:16:21.962 }, 00:16:21.962 "auth": { 00:16:21.962 "state": "completed", 00:16:21.962 "digest": "sha256", 00:16:21.962 "dhgroup": "ffdhe6144" 00:16:21.962 } 00:16:21.962 } 00:16:21.962 ]' 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.962 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.220 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:22.221 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.157 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.724 00:16:23.724 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.724 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.724 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.983 { 00:16:23.983 "cntlid": 41, 00:16:23.983 "qid": 0, 00:16:23.983 "state": "enabled", 00:16:23.983 "thread": "nvmf_tgt_poll_group_000", 00:16:23.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:23.983 "listen_address": { 00:16:23.983 "trtype": "TCP", 00:16:23.983 "adrfam": "IPv4", 00:16:23.983 "traddr": "10.0.0.2", 00:16:23.983 "trsvcid": "4420" 00:16:23.983 }, 00:16:23.983 "peer_address": { 00:16:23.983 "trtype": "TCP", 00:16:23.983 "adrfam": "IPv4", 00:16:23.983 "traddr": "10.0.0.1", 00:16:23.983 "trsvcid": "58786" 00:16:23.983 }, 00:16:23.983 "auth": { 00:16:23.983 "state": "completed", 00:16:23.983 "digest": "sha256", 00:16:23.983 "dhgroup": "ffdhe8192" 00:16:23.983 } 00:16:23.983 } 00:16:23.983 ]' 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.983 11:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.983 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.241 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:24.241 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:24.809 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.809 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:24.809 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.809 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.068 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.068 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.068 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.068 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.327 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.895 00:16:25.895 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.895 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.895 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.153 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.153 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.154 { 00:16:26.154 "cntlid": 43, 00:16:26.154 "qid": 0, 00:16:26.154 "state": "enabled", 00:16:26.154 "thread": "nvmf_tgt_poll_group_000", 00:16:26.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:26.154 "listen_address": { 00:16:26.154 "trtype": "TCP", 00:16:26.154 "adrfam": "IPv4", 00:16:26.154 "traddr": "10.0.0.2", 00:16:26.154 "trsvcid": "4420" 00:16:26.154 }, 00:16:26.154 "peer_address": { 00:16:26.154 "trtype": "TCP", 00:16:26.154 "adrfam": "IPv4", 00:16:26.154 "traddr": "10.0.0.1", 00:16:26.154 "trsvcid": "58802" 00:16:26.154 }, 00:16:26.154 "auth": { 00:16:26.154 "state": "completed", 00:16:26.154 "digest": "sha256", 00:16:26.154 "dhgroup": "ffdhe8192" 00:16:26.154 } 00:16:26.154 } 00:16:26.154 ]' 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.154 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.413 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:26.413 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.350 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.350 11:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.350 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.286 00:16:28.286 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.286 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.286 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.286 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.286 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.286 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.286 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.286 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.286 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.286 { 00:16:28.286 "cntlid": 45, 00:16:28.286 "qid": 0, 00:16:28.286 "state": "enabled", 00:16:28.286 "thread": "nvmf_tgt_poll_group_000", 00:16:28.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:28.286 "listen_address": { 00:16:28.286 "trtype": "TCP", 00:16:28.286 "adrfam": "IPv4", 00:16:28.286 "traddr": "10.0.0.2", 00:16:28.286 "trsvcid": "4420" 00:16:28.286 }, 00:16:28.286 "peer_address": { 00:16:28.287 "trtype": "TCP", 00:16:28.287 "adrfam": "IPv4", 00:16:28.287 "traddr": "10.0.0.1", 00:16:28.287 "trsvcid": "58828" 00:16:28.287 }, 00:16:28.287 "auth": { 00:16:28.287 "state": "completed", 00:16:28.287 "digest": "sha256", 00:16:28.287 "dhgroup": "ffdhe8192" 00:16:28.287 } 00:16:28.287 } 00:16:28.287 ]' 00:16:28.287 
11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.287 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.287 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.545 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.545 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.545 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.545 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.545 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.803 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:28.803 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.368 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.935 11:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.935 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.502 00:16:30.502 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.502 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.502 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.760 { 00:16:30.760 "cntlid": 47, 00:16:30.760 "qid": 0, 00:16:30.760 "state": "enabled", 00:16:30.760 "thread": "nvmf_tgt_poll_group_000", 00:16:30.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:30.760 "listen_address": { 00:16:30.760 "trtype": "TCP", 00:16:30.760 "adrfam": "IPv4", 00:16:30.760 "traddr": "10.0.0.2", 00:16:30.760 "trsvcid": "4420" 00:16:30.760 }, 00:16:30.760 "peer_address": { 00:16:30.760 "trtype": "TCP", 00:16:30.760 "adrfam": "IPv4", 00:16:30.760 "traddr": "10.0.0.1", 00:16:30.760 "trsvcid": "58846" 00:16:30.760 }, 00:16:30.760 "auth": { 00:16:30.760 "state": "completed", 00:16:30.760 
"digest": "sha256", 00:16:30.760 "dhgroup": "ffdhe8192" 00:16:30.760 } 00:16:30.760 } 00:16:30.760 ]' 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.760 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.019 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:31.019 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:31.586 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:31.845 11:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.845 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.413 00:16:32.413 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.413 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.413 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.672 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.672 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.672 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.672 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.673 { 00:16:32.673 "cntlid": 49, 00:16:32.673 "qid": 0, 00:16:32.673 "state": "enabled", 00:16:32.673 "thread": "nvmf_tgt_poll_group_000", 00:16:32.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:32.673 "listen_address": { 00:16:32.673 "trtype": "TCP", 00:16:32.673 "adrfam": "IPv4", 
00:16:32.673 "traddr": "10.0.0.2", 00:16:32.673 "trsvcid": "4420" 00:16:32.673 }, 00:16:32.673 "peer_address": { 00:16:32.673 "trtype": "TCP", 00:16:32.673 "adrfam": "IPv4", 00:16:32.673 "traddr": "10.0.0.1", 00:16:32.673 "trsvcid": "57778" 00:16:32.673 }, 00:16:32.673 "auth": { 00:16:32.673 "state": "completed", 00:16:32.673 "digest": "sha384", 00:16:32.673 "dhgroup": "null" 00:16:32.673 } 00:16:32.673 } 00:16:32.673 ]' 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.673 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.931 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:32.931 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:33.864 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.864 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:33.864 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.864 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.864 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.864 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.865 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.865 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.122 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.381 00:16:34.381 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.381 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.381 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.641 { 00:16:34.641 "cntlid": 51, 00:16:34.641 "qid": 0, 00:16:34.641 "state": "enabled", 
00:16:34.641 "thread": "nvmf_tgt_poll_group_000", 00:16:34.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:34.641 "listen_address": { 00:16:34.641 "trtype": "TCP", 00:16:34.641 "adrfam": "IPv4", 00:16:34.641 "traddr": "10.0.0.2", 00:16:34.641 "trsvcid": "4420" 00:16:34.641 }, 00:16:34.641 "peer_address": { 00:16:34.641 "trtype": "TCP", 00:16:34.641 "adrfam": "IPv4", 00:16:34.641 "traddr": "10.0.0.1", 00:16:34.641 "trsvcid": "57808" 00:16:34.641 }, 00:16:34.641 "auth": { 00:16:34.641 "state": "completed", 00:16:34.641 "digest": "sha384", 00:16:34.641 "dhgroup": "null" 00:16:34.641 } 00:16:34.641 } 00:16:34.641 ]' 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.641 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.209 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:35.209 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:35.778 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.036 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.294 00:16:36.294 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.294 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.294 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.553 11:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.553 { 00:16:36.553 "cntlid": 53, 00:16:36.553 "qid": 0, 00:16:36.553 "state": "enabled", 00:16:36.553 "thread": "nvmf_tgt_poll_group_000", 00:16:36.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:36.553 "listen_address": { 00:16:36.553 "trtype": "TCP", 00:16:36.553 "adrfam": "IPv4", 00:16:36.553 "traddr": "10.0.0.2", 00:16:36.553 "trsvcid": "4420" 00:16:36.553 }, 00:16:36.553 "peer_address": { 00:16:36.553 "trtype": "TCP", 00:16:36.553 "adrfam": "IPv4", 00:16:36.553 "traddr": "10.0.0.1", 00:16:36.553 "trsvcid": "57832" 00:16:36.553 }, 00:16:36.553 "auth": { 00:16:36.553 "state": "completed", 00:16:36.553 "digest": "sha384", 00:16:36.553 "dhgroup": "null" 00:16:36.553 } 00:16:36.553 } 00:16:36.553 ]' 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.553 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.812 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:36.812 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:37.748 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.007 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.266 00:16:38.266 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.266 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.266 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.525 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.526 { 00:16:38.526 "cntlid": 55, 00:16:38.526 "qid": 0, 00:16:38.526 "state": "enabled", 00:16:38.526 "thread": "nvmf_tgt_poll_group_000", 00:16:38.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:38.526 "listen_address": { 00:16:38.526 "trtype": "TCP", 00:16:38.526 "adrfam": "IPv4", 00:16:38.526 "traddr": "10.0.0.2", 00:16:38.526 "trsvcid": "4420" 00:16:38.526 }, 00:16:38.526 "peer_address": { 00:16:38.526 "trtype": "TCP", 00:16:38.526 "adrfam": "IPv4", 00:16:38.526 "traddr": "10.0.0.1", 00:16:38.526 "trsvcid": "57844" 00:16:38.526 }, 00:16:38.526 "auth": { 00:16:38.526 "state": "completed", 00:16:38.526 "digest": "sha384", 00:16:38.526 "dhgroup": "null" 00:16:38.526 } 00:16:38.526 } 00:16:38.526 ]' 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.526 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.095 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:39.095 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.662 11:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.662 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.921 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.490 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.490 { 00:16:40.490 "cntlid": 57, 00:16:40.490 "qid": 0, 00:16:40.490 "state": "enabled", 00:16:40.490 "thread": "nvmf_tgt_poll_group_000", 00:16:40.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:40.490 "listen_address": { 00:16:40.490 "trtype": "TCP", 00:16:40.490 "adrfam": "IPv4", 00:16:40.490 "traddr": "10.0.0.2", 00:16:40.490 "trsvcid": "4420" 00:16:40.490 }, 00:16:40.490 "peer_address": { 00:16:40.490 "trtype": "TCP", 00:16:40.490 "adrfam": "IPv4", 00:16:40.490 "traddr": "10.0.0.1", 00:16:40.490 "trsvcid": "57868" 00:16:40.490 }, 00:16:40.490 "auth": { 00:16:40.490 "state": "completed", 00:16:40.490 "digest": "sha384", 00:16:40.490 "dhgroup": "ffdhe2048" 00:16:40.490 } 00:16:40.490 } 00:16:40.490 ]' 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.490 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.748 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.748 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.748 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.749 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.749 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.008 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:41.008 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.945 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.204 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.204 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.204 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.204 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.463 00:16:42.463 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.463 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.463 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.721 { 00:16:42.721 "cntlid": 59, 00:16:42.721 "qid": 0, 00:16:42.721 "state": "enabled", 00:16:42.721 "thread": "nvmf_tgt_poll_group_000", 00:16:42.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:42.721 "listen_address": { 00:16:42.721 "trtype": "TCP", 00:16:42.721 "adrfam": "IPv4", 00:16:42.721 "traddr": "10.0.0.2", 00:16:42.721 "trsvcid": "4420" 00:16:42.721 }, 00:16:42.721 "peer_address": { 00:16:42.721 "trtype": "TCP", 00:16:42.721 "adrfam": "IPv4", 00:16:42.721 "traddr": "10.0.0.1", 00:16:42.721 "trsvcid": "60390" 00:16:42.721 }, 00:16:42.721 "auth": { 00:16:42.721 "state": "completed", 00:16:42.721 "digest": "sha384", 00:16:42.721 "dhgroup": "ffdhe2048" 00:16:42.721 } 00:16:42.721 } 00:16:42.721 ]' 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.721 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.722 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.722 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.983 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.983 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.983 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.241 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:43.241 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:43.808 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.808 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:43.808 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.808 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.068 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.068 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.068 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.068 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.325 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.583 00:16:44.583 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.583 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.583 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.842 { 00:16:44.842 "cntlid": 61, 00:16:44.842 "qid": 0, 00:16:44.842 "state": "enabled", 00:16:44.842 "thread": "nvmf_tgt_poll_group_000", 00:16:44.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:44.842 "listen_address": { 00:16:44.842 "trtype": "TCP", 00:16:44.842 "adrfam": "IPv4", 00:16:44.842 "traddr": "10.0.0.2", 00:16:44.842 "trsvcid": "4420" 00:16:44.842 }, 00:16:44.842 "peer_address": { 00:16:44.842 "trtype": "TCP", 00:16:44.842 "adrfam": "IPv4", 00:16:44.842 "traddr": "10.0.0.1", 00:16:44.842 "trsvcid": "60428" 00:16:44.842 }, 00:16:44.842 "auth": { 00:16:44.842 "state": "completed", 00:16:44.842 "digest": "sha384", 00:16:44.842 "dhgroup": "ffdhe2048" 00:16:44.842 } 00:16:44.842 } 00:16:44.842 ]' 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.842 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:45.410 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.978 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.237 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.496 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.496 { 00:16:46.496 "cntlid": 63, 00:16:46.496 "qid": 0, 00:16:46.496 "state": "enabled", 00:16:46.496 "thread": "nvmf_tgt_poll_group_000", 00:16:46.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:46.496 "listen_address": { 00:16:46.496 "trtype": "TCP", 00:16:46.496 "adrfam": "IPv4", 00:16:46.496 "traddr": "10.0.0.2", 00:16:46.496 "trsvcid": "4420" 00:16:46.496 }, 00:16:46.496 "peer_address": { 00:16:46.496 "trtype": "TCP", 00:16:46.496 "adrfam": "IPv4", 00:16:46.496 "traddr": "10.0.0.1", 00:16:46.496 "trsvcid": "60442" 00:16:46.496 }, 00:16:46.496 "auth": { 00:16:46.496 "state": "completed", 00:16:46.496 "digest": "sha384", 00:16:46.496 "dhgroup": "ffdhe2048" 00:16:46.496 } 00:16:46.496 } 00:16:46.496 ]' 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.496 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.755 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.755 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.755 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.755 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.755 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.014 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:47.014 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:47.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:47.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.209 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.209 
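For each dhgroup/key pair the trace above repeats the same setup sequence; stripped of the xtrace noise it is roughly the following (a sketch reconstructed from the log — rpc_cmd drives the target RPC socket, hostrpc wraps rpc.py -s /var/tmp/host.sock as shown in the trace, and $subnqn, $hostnqn, $dhgroup and $keyid are placeholders for the literal values printed above):

  # host side: restrict the initiator to the digest/dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"

  # target side: register the host with the key pair being exercised
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # host side: attach a controller, authenticating with the same keys
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
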
00:16:48.468 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.468 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.468 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.727 { 00:16:48.727 "cntlid": 65, 00:16:48.727 "qid": 0, 00:16:48.727 "state": "enabled", 00:16:48.727 "thread": "nvmf_tgt_poll_group_000", 00:16:48.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:48.727 "listen_address": { 00:16:48.727 "trtype": "TCP", 00:16:48.727 "adrfam": "IPv4", 00:16:48.727 "traddr": "10.0.0.2", 00:16:48.727 "trsvcid": "4420" 00:16:48.727 }, 00:16:48.727 "peer_address": { 00:16:48.727 "trtype": "TCP", 00:16:48.727 "adrfam": "IPv4", 00:16:48.727 "traddr": "10.0.0.1", 00:16:48.727 "trsvcid": "60484" 00:16:48.727 }, 00:16:48.727 "auth": { 00:16:48.727 "state": "completed", 00:16:48.727 "digest": "sha384", 00:16:48.727 "dhgroup": "ffdhe3072" 00:16:48.727 } 00:16:48.727 } 00:16:48.727 ]' 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.727 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.986 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:48.986 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:49.923 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.923 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:49.923 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.923 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.923 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.923 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.924 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.924 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.183 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.442 00:16:50.442 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.442 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.442 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.701 { 00:16:50.701 "cntlid": 67, 00:16:50.701 "qid": 0, 00:16:50.701 "state": "enabled", 00:16:50.701 "thread": "nvmf_tgt_poll_group_000", 00:16:50.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:50.701 "listen_address": { 00:16:50.701 "trtype": "TCP", 00:16:50.701 "adrfam": "IPv4", 00:16:50.701 "traddr": "10.0.0.2", 00:16:50.701 "trsvcid": "4420" 00:16:50.701 }, 00:16:50.701 "peer_address": { 00:16:50.701 "trtype": "TCP", 00:16:50.701 "adrfam": "IPv4", 00:16:50.701 "traddr": "10.0.0.1", 00:16:50.701 "trsvcid": "60506" 00:16:50.701 }, 00:16:50.701 "auth": { 00:16:50.701 "state": "completed", 00:16:50.701 "digest": "sha384", 00:16:50.701 "dhgroup": "ffdhe3072" 00:16:50.701 } 00:16:50.701 } 00:16:50.701 ]' 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.701 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.959 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret 
DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:50.959 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.895 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.154 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.413 00:16:52.413 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.413 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.413 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.671 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.671 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.671 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.671 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.671 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.671 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.671 { 00:16:52.671 "cntlid": 69, 00:16:52.671 "qid": 0, 00:16:52.671 "state": "enabled", 00:16:52.671 "thread": "nvmf_tgt_poll_group_000", 00:16:52.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:52.671 "listen_address": { 00:16:52.671 "trtype": "TCP", 00:16:52.671 "adrfam": "IPv4", 00:16:52.671 "traddr": "10.0.0.2", 00:16:52.671 "trsvcid": "4420" 00:16:52.671 }, 00:16:52.671 "peer_address": { 00:16:52.672 "trtype": "TCP", 00:16:52.672 "adrfam": "IPv4", 00:16:52.672 "traddr": "10.0.0.1", 00:16:52.672 "trsvcid": "42930" 00:16:52.672 }, 00:16:52.672 "auth": { 00:16:52.672 "state": "completed", 00:16:52.672 "digest": "sha384", 00:16:52.672 "dhgroup": "ffdhe3072" 00:16:52.672 } 00:16:52.672 } 00:16:52.672 ]' 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.672 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:52.931 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:52.931 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.866 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.124 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:54.124 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.124 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.124 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.124 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
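Once the controller is attached, the trace verifies the result on both sides before tearing it down again; approximately (a sketch using the same helpers and jq filters that appear in the log, with $subnqn and $dhgroup as placeholders):

  # host side: exactly one controller named nvme0 must exist
  [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # target side: the qpair (qid 0) must report the negotiated auth parameters
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

  # detach before the nvme-cli pass repeats the handshake
  hostrpc bdev_nvme_detach_controller nvme0
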
00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.125 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.383 00:16:54.383 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.383 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.383 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.642 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.642 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.643 { 00:16:54.643 "cntlid": 71, 00:16:54.643 "qid": 0, 00:16:54.643 "state": "enabled", 00:16:54.643 "thread": "nvmf_tgt_poll_group_000", 00:16:54.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:54.643 "listen_address": { 00:16:54.643 "trtype": "TCP", 00:16:54.643 "adrfam": "IPv4", 00:16:54.643 "traddr": "10.0.0.2", 00:16:54.643 "trsvcid": "4420" 00:16:54.643 }, 00:16:54.643 "peer_address": { 00:16:54.643 "trtype": "TCP", 00:16:54.643 "adrfam": "IPv4", 00:16:54.643 "traddr": "10.0.0.1", 00:16:54.643 "trsvcid": "42944" 00:16:54.643 }, 00:16:54.643 "auth": { 00:16:54.643 "state": "completed", 00:16:54.643 "digest": "sha384", 00:16:54.643 "dhgroup": "ffdhe3072" 00:16:54.643 } 00:16:54.643 } 00:16:54.643 ]' 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.643 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.902 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.902 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.902 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.160 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:55.160 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:16:56.095 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.095 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:56.095 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.095 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.095 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
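The same key material is then exercised once more through the kernel initiator before the host entry is revoked for the next iteration; roughly (a sketch — the full DHHC-1 secrets appear verbatim in the log and are abbreviated here, and $hostid stands for the uuid passed as --hostid):

  # kernel initiator: connect with the host secret (and controller secret, when bidirectional)
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."

  # drop the connection and remove the host from the subsystem
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
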
00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.096 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.355 00:16:56.355 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.355 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.355 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.613 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.872 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.872 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.872 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.872 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.872 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.872 { 00:16:56.872 "cntlid": 73, 00:16:56.873 "qid": 0, 00:16:56.873 "state": "enabled", 00:16:56.873 "thread": "nvmf_tgt_poll_group_000", 00:16:56.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:56.873 "listen_address": { 00:16:56.873 "trtype": "TCP", 00:16:56.873 "adrfam": "IPv4", 00:16:56.873 "traddr": "10.0.0.2", 00:16:56.873 "trsvcid": "4420" 00:16:56.873 }, 00:16:56.873 "peer_address": { 00:16:56.873 "trtype": "TCP", 00:16:56.873 "adrfam": "IPv4", 00:16:56.873 "traddr": "10.0.0.1", 00:16:56.873 "trsvcid": "42968" 00:16:56.873 }, 00:16:56.873 "auth": { 00:16:56.873 "state": "completed", 00:16:56.873 "digest": "sha384", 00:16:56.873 "dhgroup": "ffdhe4096" 00:16:56.873 } 00:16:56.873 } 00:16:56.873 ]' 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.873 
11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.873 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.132 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:57.132 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.070 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.329 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.588 00:16:58.588 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.588 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.588 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.847 { 00:16:58.847 "cntlid": 75, 00:16:58.847 "qid": 0, 00:16:58.847 "state": "enabled", 00:16:58.847 "thread": "nvmf_tgt_poll_group_000", 00:16:58.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:16:58.847 "listen_address": { 00:16:58.847 "trtype": "TCP", 00:16:58.847 "adrfam": "IPv4", 00:16:58.847 "traddr": "10.0.0.2", 00:16:58.847 "trsvcid": "4420" 00:16:58.847 }, 00:16:58.847 "peer_address": { 00:16:58.847 "trtype": "TCP", 00:16:58.847 "adrfam": "IPv4", 00:16:58.847 "traddr": "10.0.0.1", 00:16:58.847 "trsvcid": "42980" 00:16:58.847 }, 00:16:58.847 "auth": { 00:16:58.847 "state": "completed", 00:16:58.847 "digest": "sha384", 00:16:58.847 "dhgroup": "ffdhe4096" 00:16:58.847 } 00:16:58.847 } 00:16:58.847 ]' 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.847 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:58.848 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.848 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.848 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.848 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.107 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:16:59.107 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.042 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.557 00:17:00.557 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.557 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.557 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.815 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.816 { 00:17:00.816 "cntlid": 77, 00:17:00.816 "qid": 0, 00:17:00.816 "state": "enabled", 00:17:00.816 "thread": "nvmf_tgt_poll_group_000", 00:17:00.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:00.816 "listen_address": { 00:17:00.816 "trtype": "TCP", 00:17:00.816 "adrfam": "IPv4", 00:17:00.816 "traddr": "10.0.0.2", 00:17:00.816 "trsvcid": "4420" 00:17:00.816 }, 00:17:00.816 "peer_address": { 00:17:00.816 "trtype": "TCP", 00:17:00.816 "adrfam": "IPv4", 00:17:00.816 "traddr": "10.0.0.1", 00:17:00.816 "trsvcid": "43006" 00:17:00.816 }, 00:17:00.816 "auth": { 00:17:00.816 "state": "completed", 00:17:00.816 "digest": "sha384", 00:17:00.816 "dhgroup": "ffdhe4096" 00:17:00.816 } 00:17:00.816 } 00:17:00.816 ]' 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.816 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.816 11:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.074 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.074 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.074 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.074 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.074 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.333 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:01.333 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.269 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.269 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.836 00:17:02.836 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.836 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.836 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.095 { 00:17:03.095 "cntlid": 79, 00:17:03.095 "qid": 0, 00:17:03.095 "state": "enabled", 00:17:03.095 "thread": "nvmf_tgt_poll_group_000", 00:17:03.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:03.095 "listen_address": { 00:17:03.095 "trtype": "TCP", 00:17:03.095 "adrfam": "IPv4", 00:17:03.095 "traddr": "10.0.0.2", 00:17:03.095 "trsvcid": "4420" 00:17:03.095 }, 00:17:03.095 "peer_address": { 00:17:03.095 "trtype": "TCP", 00:17:03.095 "adrfam": "IPv4", 00:17:03.095 "traddr": "10.0.0.1", 00:17:03.095 "trsvcid": "47178" 00:17:03.095 }, 00:17:03.095 "auth": { 00:17:03.095 "state": "completed", 00:17:03.095 "digest": "sha384", 00:17:03.095 "dhgroup": "ffdhe4096" 00:17:03.095 } 00:17:03.095 } 00:17:03.095 ]' 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.095 11:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.095 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.354 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:03.354 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.290 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.549 11:35:05 
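The pass/fail criterion for each round is the trio of [[ ... ]] checks that follows every qpair dump above: the digest, DH group and auth state reported by nvmf_subsystem_get_qpairs must match what the round configured. Roughly, with the same jq filters the trace shows (values here are for the ffdhe6144 rounds that begin at this point):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished
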
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.549 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.117 00:17:05.117 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.117 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.117 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.376 { 00:17:05.376 "cntlid": 81, 00:17:05.376 "qid": 0, 00:17:05.376 "state": "enabled", 00:17:05.376 "thread": "nvmf_tgt_poll_group_000", 00:17:05.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:05.376 "listen_address": { 00:17:05.376 "trtype": "TCP", 00:17:05.376 "adrfam": "IPv4", 00:17:05.376 "traddr": "10.0.0.2", 00:17:05.376 "trsvcid": "4420" 00:17:05.376 }, 00:17:05.376 "peer_address": { 00:17:05.376 "trtype": "TCP", 00:17:05.376 "adrfam": "IPv4", 00:17:05.376 "traddr": "10.0.0.1", 00:17:05.376 "trsvcid": "47200" 00:17:05.376 }, 00:17:05.376 "auth": { 00:17:05.376 "state": "completed", 00:17:05.376 "digest": 
"sha384", 00:17:05.376 "dhgroup": "ffdhe6144" 00:17:05.376 } 00:17:05.376 } 00:17:05.376 ]' 00:17:05.376 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.376 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.376 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.376 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.376 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.377 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.377 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.377 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.635 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:05.635 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:06.572 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.572 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:06.573 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.573 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.573 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.573 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.573 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.573 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.831 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:06.831 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.831 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.832 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.091 00:17:07.091 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.091 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.091 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.350 { 00:17:07.350 "cntlid": 83, 00:17:07.350 "qid": 0, 00:17:07.350 "state": "enabled", 00:17:07.350 "thread": "nvmf_tgt_poll_group_000", 00:17:07.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:07.350 "listen_address": { 00:17:07.350 "trtype": "TCP", 00:17:07.350 "adrfam": "IPv4", 00:17:07.350 "traddr": "10.0.0.2", 00:17:07.350 
"trsvcid": "4420" 00:17:07.350 }, 00:17:07.350 "peer_address": { 00:17:07.350 "trtype": "TCP", 00:17:07.350 "adrfam": "IPv4", 00:17:07.350 "traddr": "10.0.0.1", 00:17:07.350 "trsvcid": "47216" 00:17:07.350 }, 00:17:07.350 "auth": { 00:17:07.350 "state": "completed", 00:17:07.350 "digest": "sha384", 00:17:07.350 "dhgroup": "ffdhe6144" 00:17:07.350 } 00:17:07.350 } 00:17:07.350 ]' 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.350 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.609 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.609 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.609 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.609 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.609 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.868 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:07.868 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.804 
11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.804 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.372 00:17:09.372 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.372 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.372 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.631 { 00:17:09.631 "cntlid": 85, 00:17:09.631 "qid": 0, 00:17:09.631 "state": "enabled", 00:17:09.631 "thread": "nvmf_tgt_poll_group_000", 00:17:09.631 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:09.631 "listen_address": { 00:17:09.631 "trtype": "TCP", 00:17:09.631 "adrfam": "IPv4", 00:17:09.631 "traddr": "10.0.0.2", 00:17:09.631 "trsvcid": "4420" 00:17:09.631 }, 00:17:09.631 "peer_address": { 00:17:09.631 "trtype": "TCP", 00:17:09.631 "adrfam": "IPv4", 00:17:09.631 "traddr": "10.0.0.1", 00:17:09.631 "trsvcid": "47244" 00:17:09.631 }, 00:17:09.631 "auth": { 00:17:09.631 "state": "completed", 00:17:09.631 "digest": "sha384", 00:17:09.631 "dhgroup": "ffdhe6144" 00:17:09.631 } 00:17:09.631 } 00:17:09.631 ]' 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.631 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.890 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.890 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.890 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.149 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:10.149 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:10.716 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.717 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:10.717 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.717 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.717 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.717 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.717 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.717 11:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.284 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.543 00:17:11.543 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.543 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.543 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.803 { 00:17:11.803 "cntlid": 87, 
00:17:11.803 "qid": 0, 00:17:11.803 "state": "enabled", 00:17:11.803 "thread": "nvmf_tgt_poll_group_000", 00:17:11.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:11.803 "listen_address": { 00:17:11.803 "trtype": "TCP", 00:17:11.803 "adrfam": "IPv4", 00:17:11.803 "traddr": "10.0.0.2", 00:17:11.803 "trsvcid": "4420" 00:17:11.803 }, 00:17:11.803 "peer_address": { 00:17:11.803 "trtype": "TCP", 00:17:11.803 "adrfam": "IPv4", 00:17:11.803 "traddr": "10.0.0.1", 00:17:11.803 "trsvcid": "47256" 00:17:11.803 }, 00:17:11.803 "auth": { 00:17:11.803 "state": "completed", 00:17:11.803 "digest": "sha384", 00:17:11.803 "dhgroup": "ffdhe6144" 00:17:11.803 } 00:17:11.803 } 00:17:11.803 ]' 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.803 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.062 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:12.062 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:12.999 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.000 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.567 00:17:13.567 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.567 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.567 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.826 { 00:17:13.826 "cntlid": 89, 00:17:13.826 "qid": 0, 00:17:13.826 "state": "enabled", 00:17:13.826 "thread": "nvmf_tgt_poll_group_000", 00:17:13.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:13.826 "listen_address": { 00:17:13.826 "trtype": "TCP", 00:17:13.826 "adrfam": "IPv4", 00:17:13.826 "traddr": "10.0.0.2", 00:17:13.826 "trsvcid": "4420" 00:17:13.826 }, 00:17:13.826 "peer_address": { 00:17:13.826 "trtype": "TCP", 00:17:13.826 "adrfam": "IPv4", 00:17:13.826 "traddr": "10.0.0.1", 00:17:13.826 "trsvcid": "42992" 00:17:13.826 }, 00:17:13.826 "auth": { 00:17:13.826 "state": "completed", 00:17:13.826 "digest": "sha384", 00:17:13.826 "dhgroup": "ffdhe8192" 00:17:13.826 } 00:17:13.826 } 00:17:13.826 ]' 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.826 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.827 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.827 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.085 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.085 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.085 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.345 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:14.345 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.912 11:35:15 
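Besides the bdev-level attach, every pass re-checks the same key material through the kernel initiator, which is what the nvme_connect / nvme disconnect pairs in the trace do; the DHHC-1:...: strings are the DH-HMAC-CHAP secrets in the textual form nvme-cli accepts. Condensed, with "$hostnqn" and "$hostid" standing for the uuid-based values used throughout and the secrets elided since the full strings appear in the calls above:

    # kernel initiator path: connect with the host and controller secrets, then disconnect
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:00:<host key>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl key>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
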
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.912 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.516 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.892 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.191 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.191 { 00:17:16.191 "cntlid": 91, 00:17:16.191 "qid": 0, 00:17:16.191 "state": "enabled", 00:17:16.191 "thread": "nvmf_tgt_poll_group_000", 00:17:16.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:16.191 "listen_address": { 00:17:16.191 "trtype": "TCP", 00:17:16.191 "adrfam": "IPv4", 00:17:16.191 "traddr": "10.0.0.2", 00:17:16.191 "trsvcid": "4420" 00:17:16.191 }, 00:17:16.191 "peer_address": { 00:17:16.191 "trtype": "TCP", 00:17:16.191 "adrfam": "IPv4", 00:17:16.191 "traddr": "10.0.0.1", 00:17:16.191 "trsvcid": "43010" 00:17:16.191 }, 00:17:16.191 "auth": { 00:17:16.191 "state": "completed", 00:17:16.191 "digest": "sha384", 00:17:16.191 "dhgroup": "ffdhe8192" 00:17:16.191 } 00:17:16.191 } 00:17:16.191 ]' 00:17:16.192 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.192 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.192 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.450 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.450 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.450 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.450 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.450 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.451 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:16.451 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:17.389 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.389 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:17.389 11:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.389 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.389 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.957 00:17:17.957 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.957 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.957 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.215 11:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.215 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.215 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.215 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.215 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.215 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.215 { 00:17:18.215 "cntlid": 93, 00:17:18.215 "qid": 0, 00:17:18.215 "state": "enabled", 00:17:18.215 "thread": "nvmf_tgt_poll_group_000", 00:17:18.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:18.215 "listen_address": { 00:17:18.215 "trtype": "TCP", 00:17:18.215 "adrfam": "IPv4", 00:17:18.216 "traddr": "10.0.0.2", 00:17:18.216 "trsvcid": "4420" 00:17:18.216 }, 00:17:18.216 "peer_address": { 00:17:18.216 "trtype": "TCP", 00:17:18.216 "adrfam": "IPv4", 00:17:18.216 "traddr": "10.0.0.1", 00:17:18.216 "trsvcid": "43038" 00:17:18.216 }, 00:17:18.216 "auth": { 00:17:18.216 "state": "completed", 00:17:18.216 "digest": "sha384", 00:17:18.216 "dhgroup": "ffdhe8192" 00:17:18.216 } 00:17:18.216 } 00:17:18.216 ]' 00:17:18.216 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.216 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.216 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.475 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.475 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.475 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.475 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.475 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.735 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:18.735 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:19.300 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.559 11:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.559 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.126 00:17:20.126 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.126 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.126 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.385 { 00:17:20.385 "cntlid": 95, 00:17:20.385 "qid": 0, 00:17:20.385 "state": "enabled", 00:17:20.385 "thread": "nvmf_tgt_poll_group_000", 00:17:20.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:20.385 "listen_address": { 00:17:20.385 "trtype": "TCP", 00:17:20.385 "adrfam": "IPv4", 00:17:20.385 "traddr": "10.0.0.2", 00:17:20.385 "trsvcid": "4420" 00:17:20.385 }, 00:17:20.385 "peer_address": { 00:17:20.385 "trtype": "TCP", 00:17:20.385 "adrfam": "IPv4", 00:17:20.385 "traddr": "10.0.0.1", 00:17:20.385 "trsvcid": "43074" 00:17:20.385 }, 00:17:20.385 "auth": { 00:17:20.385 "state": "completed", 00:17:20.385 "digest": "sha384", 00:17:20.385 "dhgroup": "ffdhe8192" 00:17:20.385 } 00:17:20.385 } 00:17:20.385 ]' 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.385 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.643 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.643 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.643 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.643 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.643 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.902 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:20.902 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.839 11:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.839 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.099 00:17:22.099 
11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.099 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.099 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.357 { 00:17:22.357 "cntlid": 97, 00:17:22.357 "qid": 0, 00:17:22.357 "state": "enabled", 00:17:22.357 "thread": "nvmf_tgt_poll_group_000", 00:17:22.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:22.357 "listen_address": { 00:17:22.357 "trtype": "TCP", 00:17:22.357 "adrfam": "IPv4", 00:17:22.357 "traddr": "10.0.0.2", 00:17:22.357 "trsvcid": "4420" 00:17:22.357 }, 00:17:22.357 "peer_address": { 00:17:22.357 "trtype": "TCP", 00:17:22.357 "adrfam": "IPv4", 00:17:22.357 "traddr": "10.0.0.1", 00:17:22.357 "trsvcid": "57800" 00:17:22.357 }, 00:17:22.357 "auth": { 00:17:22.357 "state": "completed", 00:17:22.357 "digest": "sha512", 00:17:22.357 "dhgroup": "null" 00:17:22.357 } 00:17:22.357 } 00:17:22.357 ]' 00:17:22.357 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.616 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.875 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:22.875 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.811 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.081 00:17:24.081 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.081 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.081 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.344 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.344 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.344 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.344 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.344 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.344 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.344 { 00:17:24.344 "cntlid": 99, 00:17:24.344 "qid": 0, 00:17:24.344 "state": "enabled", 00:17:24.344 "thread": "nvmf_tgt_poll_group_000", 00:17:24.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:24.344 "listen_address": { 00:17:24.344 "trtype": "TCP", 00:17:24.344 "adrfam": "IPv4", 00:17:24.344 "traddr": "10.0.0.2", 00:17:24.344 "trsvcid": "4420" 00:17:24.344 }, 00:17:24.344 "peer_address": { 00:17:24.344 "trtype": "TCP", 00:17:24.344 "adrfam": "IPv4", 00:17:24.344 "traddr": "10.0.0.1", 00:17:24.344 "trsvcid": "57824" 00:17:24.344 }, 00:17:24.344 "auth": { 00:17:24.344 "state": "completed", 00:17:24.344 "digest": "sha512", 00:17:24.344 "dhgroup": "null" 00:17:24.344 } 00:17:24.344 } 00:17:24.344 ]' 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.603 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.862 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:24.862 11:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.797 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
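Note: the trace above, and every repetition that follows, is one pass of the same round trip: bdev_nvme_set_options pins the host to a single DH-HMAC-CHAP digest/DH-group pair, nvmf_subsystem_add_host allows the host NQN on the subsystem with a key/controller-key pair, bdev_nvme_attach_controller performs the authenticated connect through the host RPC socket, and the qpair listing is checked for the negotiated parameters before the controller is detached again. A minimal standalone sketch of that cycle is below; it replaces the rpc_cmd/hostrpc wrappers from target/auth.sh with direct rpc.py calls, reuses the socket path, NQNs and key names from this run, and assumes the dhchap keys (key0/ckey0 ...) and the target/host applications were set up earlier in the test exactly as auth.sh does.

#!/usr/bin/env bash
# Sketch of one connect_authenticate() pass as exercised in the trace above.
# Assumption: the nvmf target listens on the default rpc.py socket and the
# host app on /var/tmp/host.sock, and keys key0..key3 / ckey0..ckey2 are
# already registered, as earlier in this run.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
digest=sha512 dhgroup=null keyid=2

# 1. Restrict the host to exactly one digest / DH-group combination.
$rpc -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with host and controller keys.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Authenticated connect from the SPDK host stack.
$rpc -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Verify the controller exists and the qpair negotiated what we asked for.
[[ $($rpc -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# 5. Tear down before the next digest/dhgroup/key combination.
$rpc -s "$hostsock" bdev_nvme_detach_controller nvme0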
00:17:26.055 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.313 00:17:26.313 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.314 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.314 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.571 { 00:17:26.571 "cntlid": 101, 00:17:26.571 "qid": 0, 00:17:26.571 "state": "enabled", 00:17:26.571 "thread": "nvmf_tgt_poll_group_000", 00:17:26.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:26.571 "listen_address": { 00:17:26.571 "trtype": "TCP", 00:17:26.571 "adrfam": "IPv4", 00:17:26.571 "traddr": "10.0.0.2", 00:17:26.571 "trsvcid": "4420" 00:17:26.571 }, 00:17:26.571 "peer_address": { 00:17:26.571 "trtype": "TCP", 00:17:26.571 "adrfam": "IPv4", 00:17:26.571 "traddr": "10.0.0.1", 00:17:26.571 "trsvcid": "57856" 00:17:26.571 }, 00:17:26.571 "auth": { 00:17:26.571 "state": "completed", 00:17:26.571 "digest": "sha512", 00:17:26.571 "dhgroup": "null" 00:17:26.571 } 00:17:26.571 } 00:17:26.571 ]' 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.571 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.830 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:26.830 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.766 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.025 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.025 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.283 { 00:17:28.283 "cntlid": 103, 00:17:28.283 "qid": 0, 00:17:28.283 "state": "enabled", 00:17:28.283 "thread": "nvmf_tgt_poll_group_000", 00:17:28.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:28.283 "listen_address": { 00:17:28.283 "trtype": "TCP", 00:17:28.283 "adrfam": "IPv4", 00:17:28.283 "traddr": "10.0.0.2", 00:17:28.283 "trsvcid": "4420" 00:17:28.283 }, 00:17:28.283 "peer_address": { 00:17:28.283 "trtype": "TCP", 00:17:28.283 "adrfam": "IPv4", 00:17:28.283 "traddr": "10.0.0.1", 00:17:28.283 "trsvcid": "57882" 00:17:28.283 }, 00:17:28.283 "auth": { 00:17:28.283 "state": "completed", 00:17:28.283 "digest": "sha512", 00:17:28.283 "dhgroup": "null" 00:17:28.283 } 00:17:28.283 } 00:17:28.283 ]' 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.283 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.540 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.540 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.540 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.798 11:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:28.798 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
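Note: between the RPC-driven iterations the script also exercises the kernel initiator through nvme-cli (the nvme_connect at target/auth.sh@36 and the disconnect at @82 in the trace above), passing the DH-HMAC-CHAP secrets directly on the command line. A hedged sketch of that step follows; HOST_KEY and CTRL_KEY are placeholders for the DHHC-1 secret strings that appear verbatim in the trace, and the addressing matches this run.

# Kernel-initiator check mirroring nvme_connect()/nvme disconnect in auth.sh.
# Assumption: HOST_KEY / CTRL_KEY hold the DHHC-1:xx:...: secrets generated
# earlier in this run (printed inline in the trace above).
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=00abaa28-3537-eb11-906e-0017a4403562
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

# On success the tool reports the controller; the test then drops it again,
# which produces the "disconnected 1 controller(s)" lines seen above.
nvme disconnect -n "$subnqn"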
00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.732 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.299 00:17:30.299 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.299 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.299 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.558 { 00:17:30.558 "cntlid": 105, 00:17:30.558 "qid": 0, 00:17:30.558 "state": "enabled", 00:17:30.558 "thread": "nvmf_tgt_poll_group_000", 00:17:30.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:30.558 "listen_address": { 00:17:30.558 "trtype": "TCP", 00:17:30.558 "adrfam": "IPv4", 00:17:30.558 "traddr": "10.0.0.2", 00:17:30.558 "trsvcid": "4420" 00:17:30.558 }, 00:17:30.558 "peer_address": { 00:17:30.558 "trtype": "TCP", 00:17:30.558 "adrfam": "IPv4", 00:17:30.558 "traddr": "10.0.0.1", 00:17:30.558 "trsvcid": "57910" 00:17:30.558 }, 00:17:30.558 "auth": { 00:17:30.558 "state": "completed", 00:17:30.558 "digest": "sha512", 00:17:30.558 "dhgroup": "ffdhe2048" 00:17:30.558 } 00:17:30.558 } 00:17:30.558 ]' 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.558 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.558 11:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.817 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:30.817 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:31.754 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.755 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.324 00:17:32.324 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.324 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.324 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.324 { 00:17:32.324 "cntlid": 107, 00:17:32.324 "qid": 0, 00:17:32.324 "state": "enabled", 00:17:32.324 "thread": "nvmf_tgt_poll_group_000", 00:17:32.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:32.324 "listen_address": { 00:17:32.324 "trtype": "TCP", 00:17:32.324 "adrfam": "IPv4", 00:17:32.324 "traddr": "10.0.0.2", 00:17:32.324 "trsvcid": "4420" 00:17:32.324 }, 00:17:32.324 "peer_address": { 00:17:32.324 "trtype": "TCP", 00:17:32.324 "adrfam": "IPv4", 00:17:32.324 "traddr": "10.0.0.1", 00:17:32.324 "trsvcid": "49664" 00:17:32.324 }, 00:17:32.324 "auth": { 00:17:32.324 "state": "completed", 00:17:32.324 "digest": "sha512", 00:17:32.324 "dhgroup": "ffdhe2048" 00:17:32.324 } 00:17:32.324 } 00:17:32.324 ]' 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.324 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:32.583 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.583 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.583 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.842 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:32.842 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:33.420 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.679 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:33.679 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.679 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.679 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.679 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.680 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.680 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
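For reference, the set_options / add_host / attach_controller sequence that the trace replays for every key reduces to roughly the commands below. This is a sketch, not part of the trace: addresses, NQNs, socket paths and flags are copied from the log above; key2/ckey2 are key names the test registered earlier (not shown here), and rpc_cmd is the test's target-side RPC wrapper, assumed to address the target app's RPC socket.

    # Host side: restrict the initiator to one digest/dhgroup combination
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host NQN to authenticate with key2 (ckey2 for bidirectional auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller, supplying the same key pair
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
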
00:17:33.938 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.939 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.939 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.939 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.939 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.939 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.197 00:17:34.197 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.197 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.197 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.462 { 00:17:34.462 "cntlid": 109, 00:17:34.462 "qid": 0, 00:17:34.462 "state": "enabled", 00:17:34.462 "thread": "nvmf_tgt_poll_group_000", 00:17:34.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:34.462 "listen_address": { 00:17:34.462 "trtype": "TCP", 00:17:34.462 "adrfam": "IPv4", 00:17:34.462 "traddr": "10.0.0.2", 00:17:34.462 "trsvcid": "4420" 00:17:34.462 }, 00:17:34.462 "peer_address": { 00:17:34.462 "trtype": "TCP", 00:17:34.462 "adrfam": "IPv4", 00:17:34.462 "traddr": "10.0.0.1", 00:17:34.462 "trsvcid": "49708" 00:17:34.462 }, 00:17:34.462 "auth": { 00:17:34.462 "state": "completed", 00:17:34.462 "digest": "sha512", 00:17:34.462 "dhgroup": "ffdhe2048" 00:17:34.462 } 00:17:34.462 } 00:17:34.462 ]' 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.462 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.462 11:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.463 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.463 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.463 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.463 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.723 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:34.723 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.712 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.971 11:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.971 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.230 00:17:36.230 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.230 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.230 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.489 { 00:17:36.489 "cntlid": 111, 00:17:36.489 "qid": 0, 00:17:36.489 "state": "enabled", 00:17:36.489 "thread": "nvmf_tgt_poll_group_000", 00:17:36.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:36.489 "listen_address": { 00:17:36.489 "trtype": "TCP", 00:17:36.489 "adrfam": "IPv4", 00:17:36.489 "traddr": "10.0.0.2", 00:17:36.489 "trsvcid": "4420" 00:17:36.489 }, 00:17:36.489 "peer_address": { 00:17:36.489 "trtype": "TCP", 00:17:36.489 "adrfam": "IPv4", 00:17:36.489 "traddr": "10.0.0.1", 00:17:36.489 "trsvcid": "49730" 00:17:36.489 }, 00:17:36.489 "auth": { 00:17:36.489 "state": "completed", 00:17:36.489 "digest": "sha512", 00:17:36.489 "dhgroup": "ffdhe2048" 00:17:36.489 } 00:17:36.489 } 00:17:36.489 ]' 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.489 
11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.489 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.748 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:36.748 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:37.315 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.316 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:37.316 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.316 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.575 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.575 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.575 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.575 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.575 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.833 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.092 00:17:38.092 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.092 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.092 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.351 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.351 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.351 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.351 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.351 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.351 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.351 { 00:17:38.351 "cntlid": 113, 00:17:38.351 "qid": 0, 00:17:38.351 "state": "enabled", 00:17:38.351 "thread": "nvmf_tgt_poll_group_000", 00:17:38.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:38.351 "listen_address": { 00:17:38.351 "trtype": "TCP", 00:17:38.351 "adrfam": "IPv4", 00:17:38.351 "traddr": "10.0.0.2", 00:17:38.351 "trsvcid": "4420" 00:17:38.351 }, 00:17:38.351 "peer_address": { 00:17:38.351 "trtype": "TCP", 00:17:38.351 "adrfam": "IPv4", 00:17:38.351 "traddr": "10.0.0.1", 00:17:38.351 "trsvcid": "49764" 00:17:38.351 }, 00:17:38.351 "auth": { 00:17:38.351 "state": "completed", 00:17:38.351 "digest": "sha512", 00:17:38.351 "dhgroup": "ffdhe3072" 00:17:38.351 } 00:17:38.351 } 00:17:38.351 ]' 00:17:38.351 11:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.351 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.610 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:38.610 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:39.547 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.806 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.065 00:17:40.065 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.065 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.065 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.324 { 00:17:40.324 "cntlid": 115, 00:17:40.324 "qid": 0, 00:17:40.324 "state": "enabled", 00:17:40.324 "thread": "nvmf_tgt_poll_group_000", 00:17:40.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:40.324 "listen_address": { 00:17:40.324 "trtype": "TCP", 00:17:40.324 "adrfam": "IPv4", 00:17:40.324 "traddr": "10.0.0.2", 00:17:40.324 "trsvcid": "4420" 00:17:40.324 }, 00:17:40.324 "peer_address": { 00:17:40.324 "trtype": "TCP", 00:17:40.324 "adrfam": "IPv4", 
00:17:40.324 "traddr": "10.0.0.1", 00:17:40.324 "trsvcid": "49800" 00:17:40.324 }, 00:17:40.324 "auth": { 00:17:40.324 "state": "completed", 00:17:40.324 "digest": "sha512", 00:17:40.324 "dhgroup": "ffdhe3072" 00:17:40.324 } 00:17:40.324 } 00:17:40.324 ]' 00:17:40.324 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.324 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.583 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:40.583 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.519 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.778 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.037 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.037 { 00:17:42.037 "cntlid": 117, 00:17:42.037 "qid": 0, 00:17:42.037 "state": "enabled", 00:17:42.037 "thread": "nvmf_tgt_poll_group_000", 00:17:42.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:42.037 "listen_address": { 00:17:42.037 "trtype": "TCP", 
00:17:42.037 "adrfam": "IPv4", 00:17:42.037 "traddr": "10.0.0.2", 00:17:42.037 "trsvcid": "4420" 00:17:42.037 }, 00:17:42.037 "peer_address": { 00:17:42.037 "trtype": "TCP", 00:17:42.037 "adrfam": "IPv4", 00:17:42.037 "traddr": "10.0.0.1", 00:17:42.037 "trsvcid": "39766" 00:17:42.037 }, 00:17:42.037 "auth": { 00:17:42.037 "state": "completed", 00:17:42.037 "digest": "sha512", 00:17:42.037 "dhgroup": "ffdhe3072" 00:17:42.037 } 00:17:42.037 } 00:17:42.037 ]' 00:17:42.037 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.295 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.553 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:42.553 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:43.121 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.121 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:43.121 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.121 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.380 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.380 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.380 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.380 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.639 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.898 00:17:43.898 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.898 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.898 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.160 { 00:17:44.160 "cntlid": 119, 00:17:44.160 "qid": 0, 00:17:44.160 "state": "enabled", 00:17:44.160 "thread": "nvmf_tgt_poll_group_000", 00:17:44.160 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:44.160 "listen_address": { 00:17:44.160 "trtype": "TCP", 00:17:44.160 "adrfam": "IPv4", 00:17:44.160 "traddr": "10.0.0.2", 00:17:44.160 "trsvcid": "4420" 00:17:44.160 }, 00:17:44.160 "peer_address": { 00:17:44.160 "trtype": "TCP", 00:17:44.160 "adrfam": "IPv4", 00:17:44.160 "traddr": "10.0.0.1", 00:17:44.160 "trsvcid": "39804" 00:17:44.160 }, 00:17:44.160 "auth": { 00:17:44.160 "state": "completed", 00:17:44.160 "digest": "sha512", 00:17:44.160 "dhgroup": "ffdhe3072" 00:17:44.160 } 00:17:44.160 } 00:17:44.160 ]' 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.160 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.425 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:44.425 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.362 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.362 11:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.362 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.932 00:17:45.932 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.932 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.932 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.191 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.191 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.191 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.191 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.191 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.191 11:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.191 { 00:17:46.191 "cntlid": 121, 00:17:46.191 "qid": 0, 00:17:46.191 "state": "enabled", 00:17:46.191 "thread": "nvmf_tgt_poll_group_000", 00:17:46.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:46.191 "listen_address": { 00:17:46.191 "trtype": "TCP", 00:17:46.191 "adrfam": "IPv4", 00:17:46.191 "traddr": "10.0.0.2", 00:17:46.191 "trsvcid": "4420" 00:17:46.191 }, 00:17:46.191 "peer_address": { 00:17:46.191 "trtype": "TCP", 00:17:46.191 "adrfam": "IPv4", 00:17:46.191 "traddr": "10.0.0.1", 00:17:46.191 "trsvcid": "39838" 00:17:46.191 }, 00:17:46.191 "auth": { 00:17:46.191 "state": "completed", 00:17:46.191 "digest": "sha512", 00:17:46.191 "dhgroup": "ffdhe4096" 00:17:46.191 } 00:17:46.191 } 00:17:46.191 ]' 00:17:46.191 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.192 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.451 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:46.451 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
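After the bdev-level check, each pass also exercises the kernel initiator: nvme-cli connects with explicit DH-HMAC-CHAP secrets, disconnects, and the host entry is revoked before the next key/dhgroup combination. Roughly, under the same assumptions as the earlier sketches (the <host key>/<controller key> placeholders stand in for the full DHHC-1 strings that appear verbatim in the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
        --hostid 00abaa28-3537-eb11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:00:<host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Target side: remove the host entry before provisioning the next key
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
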
00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.020 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.279 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.538 00:17:47.538 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.538 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.538 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.106 { 00:17:48.106 "cntlid": 123, 00:17:48.106 "qid": 0, 00:17:48.106 "state": "enabled", 00:17:48.106 "thread": "nvmf_tgt_poll_group_000", 00:17:48.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:48.106 "listen_address": { 00:17:48.106 "trtype": "TCP", 00:17:48.106 "adrfam": "IPv4", 00:17:48.106 "traddr": "10.0.0.2", 00:17:48.106 "trsvcid": "4420" 00:17:48.106 }, 00:17:48.106 "peer_address": { 00:17:48.106 "trtype": "TCP", 00:17:48.106 "adrfam": "IPv4", 00:17:48.106 "traddr": "10.0.0.1", 00:17:48.106 "trsvcid": "39856" 00:17:48.106 }, 00:17:48.106 "auth": { 00:17:48.106 "state": "completed", 00:17:48.106 "digest": "sha512", 00:17:48.106 "dhgroup": "ffdhe4096" 00:17:48.106 } 00:17:48.106 } 00:17:48.106 ]' 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.106 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.365 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:48.365 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.302 11:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.302 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.562 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.821 00:17:49.821 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.821 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.821 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.080 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.081 11:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.081 { 00:17:50.081 "cntlid": 125, 00:17:50.081 "qid": 0, 00:17:50.081 "state": "enabled", 00:17:50.081 "thread": "nvmf_tgt_poll_group_000", 00:17:50.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:50.081 "listen_address": { 00:17:50.081 "trtype": "TCP", 00:17:50.081 "adrfam": "IPv4", 00:17:50.081 "traddr": "10.0.0.2", 00:17:50.081 "trsvcid": "4420" 00:17:50.081 }, 00:17:50.081 "peer_address": { 00:17:50.081 "trtype": "TCP", 00:17:50.081 "adrfam": "IPv4", 00:17:50.081 "traddr": "10.0.0.1", 00:17:50.081 "trsvcid": "39884" 00:17:50.081 }, 00:17:50.081 "auth": { 00:17:50.081 "state": "completed", 00:17:50.081 "digest": "sha512", 00:17:50.081 "dhgroup": "ffdhe4096" 00:17:50.081 } 00:17:50.081 } 00:17:50.081 ]' 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.081 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.340 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:50.340 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.276 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.535 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.795 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.054 11:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.054 { 00:17:52.054 "cntlid": 127, 00:17:52.054 "qid": 0, 00:17:52.054 "state": "enabled", 00:17:52.054 "thread": "nvmf_tgt_poll_group_000", 00:17:52.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:52.054 "listen_address": { 00:17:52.054 "trtype": "TCP", 00:17:52.054 "adrfam": "IPv4", 00:17:52.054 "traddr": "10.0.0.2", 00:17:52.054 "trsvcid": "4420" 00:17:52.054 }, 00:17:52.054 "peer_address": { 00:17:52.054 "trtype": "TCP", 00:17:52.054 "adrfam": "IPv4", 00:17:52.054 "traddr": "10.0.0.1", 00:17:52.054 "trsvcid": "45146" 00:17:52.054 }, 00:17:52.054 "auth": { 00:17:52.054 "state": "completed", 00:17:52.054 "digest": "sha512", 00:17:52.054 "dhgroup": "ffdhe4096" 00:17:52.054 } 00:17:52.054 } 00:17:52.054 ]' 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.054 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.313 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.313 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.313 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.573 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:52.573 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.509 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.075 00:17:54.075 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.075 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.075 
11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.335 { 00:17:54.335 "cntlid": 129, 00:17:54.335 "qid": 0, 00:17:54.335 "state": "enabled", 00:17:54.335 "thread": "nvmf_tgt_poll_group_000", 00:17:54.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:54.335 "listen_address": { 00:17:54.335 "trtype": "TCP", 00:17:54.335 "adrfam": "IPv4", 00:17:54.335 "traddr": "10.0.0.2", 00:17:54.335 "trsvcid": "4420" 00:17:54.335 }, 00:17:54.335 "peer_address": { 00:17:54.335 "trtype": "TCP", 00:17:54.335 "adrfam": "IPv4", 00:17:54.335 "traddr": "10.0.0.1", 00:17:54.335 "trsvcid": "45178" 00:17:54.335 }, 00:17:54.335 "auth": { 00:17:54.335 "state": "completed", 00:17:54.335 "digest": "sha512", 00:17:54.335 "dhgroup": "ffdhe6144" 00:17:54.335 } 00:17:54.335 } 00:17:54.335 ]' 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.335 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.336 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.594 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.594 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.594 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.594 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.594 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.853 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:54.853 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret 
DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.790 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.791 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.359 00:17:56.359 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.359 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.359 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.617 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.617 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.618 { 00:17:56.618 "cntlid": 131, 00:17:56.618 "qid": 0, 00:17:56.618 "state": "enabled", 00:17:56.618 "thread": "nvmf_tgt_poll_group_000", 00:17:56.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:56.618 "listen_address": { 00:17:56.618 "trtype": "TCP", 00:17:56.618 "adrfam": "IPv4", 00:17:56.618 "traddr": "10.0.0.2", 00:17:56.618 "trsvcid": "4420" 00:17:56.618 }, 00:17:56.618 "peer_address": { 00:17:56.618 "trtype": "TCP", 00:17:56.618 "adrfam": "IPv4", 00:17:56.618 "traddr": "10.0.0.1", 00:17:56.618 "trsvcid": "45218" 00:17:56.618 }, 00:17:56.618 "auth": { 00:17:56.618 "state": "completed", 00:17:56.618 "digest": "sha512", 00:17:56.618 "dhgroup": "ffdhe6144" 00:17:56.618 } 00:17:56.618 } 00:17:56.618 ]' 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.618 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.877 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:56.877 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.814 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.073 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.641 00:17:58.641 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.641 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.641 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.900 { 00:17:58.900 "cntlid": 133, 00:17:58.900 "qid": 0, 00:17:58.900 "state": "enabled", 00:17:58.900 "thread": "nvmf_tgt_poll_group_000", 00:17:58.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:17:58.900 "listen_address": { 00:17:58.900 "trtype": "TCP", 00:17:58.900 "adrfam": "IPv4", 00:17:58.900 "traddr": "10.0.0.2", 00:17:58.900 "trsvcid": "4420" 00:17:58.900 }, 00:17:58.900 "peer_address": { 00:17:58.900 "trtype": "TCP", 00:17:58.900 "adrfam": "IPv4", 00:17:58.900 "traddr": "10.0.0.1", 00:17:58.900 "trsvcid": "45248" 00:17:58.900 }, 00:17:58.900 "auth": { 00:17:58.900 "state": "completed", 00:17:58.900 "digest": "sha512", 00:17:58.900 "dhgroup": "ffdhe6144" 00:17:58.900 } 00:17:58.900 } 00:17:58.900 ]' 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.900 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.159 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret 
DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:17:59.159 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.091 11:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:00.350 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.917 00:18:00.917 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.917 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.917 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.176 { 00:18:01.176 "cntlid": 135, 00:18:01.176 "qid": 0, 00:18:01.176 "state": "enabled", 00:18:01.176 "thread": "nvmf_tgt_poll_group_000", 00:18:01.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:01.176 "listen_address": { 00:18:01.176 "trtype": "TCP", 00:18:01.176 "adrfam": "IPv4", 00:18:01.176 "traddr": "10.0.0.2", 00:18:01.176 "trsvcid": "4420" 00:18:01.176 }, 00:18:01.176 "peer_address": { 00:18:01.176 "trtype": "TCP", 00:18:01.176 "adrfam": "IPv4", 00:18:01.176 "traddr": "10.0.0.1", 00:18:01.176 "trsvcid": "45270" 00:18:01.176 }, 00:18:01.176 "auth": { 00:18:01.176 "state": "completed", 00:18:01.176 "digest": "sha512", 00:18:01.176 "dhgroup": "ffdhe6144" 00:18:01.176 } 00:18:01.176 } 00:18:01.176 ]' 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.176 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.434 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:01.434 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.369 11:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.627 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.192 00:18:03.192 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.192 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.192 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.450 { 00:18:03.450 "cntlid": 137, 00:18:03.450 "qid": 0, 00:18:03.450 "state": "enabled", 00:18:03.450 "thread": "nvmf_tgt_poll_group_000", 00:18:03.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:03.450 "listen_address": { 00:18:03.450 "trtype": "TCP", 00:18:03.450 "adrfam": "IPv4", 00:18:03.450 "traddr": "10.0.0.2", 00:18:03.450 "trsvcid": "4420" 00:18:03.450 }, 00:18:03.450 "peer_address": { 00:18:03.450 "trtype": "TCP", 00:18:03.450 "adrfam": "IPv4", 00:18:03.450 "traddr": "10.0.0.1", 00:18:03.450 "trsvcid": "59682" 00:18:03.450 }, 00:18:03.450 "auth": { 00:18:03.450 "state": "completed", 00:18:03.450 "digest": "sha512", 00:18:03.450 "dhgroup": "ffdhe8192" 00:18:03.450 } 00:18:03.450 } 00:18:03.450 ]' 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.450 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.451 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.451 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.451 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.708 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:18:03.708 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:18:04.642 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.642 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:04.642 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.643 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.643 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.643 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.643 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.643 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.900 11:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.900 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.466 00:18:05.466 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.466 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.466 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.725 { 00:18:05.725 "cntlid": 139, 00:18:05.725 "qid": 0, 00:18:05.725 "state": "enabled", 00:18:05.725 "thread": "nvmf_tgt_poll_group_000", 00:18:05.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:05.725 "listen_address": { 00:18:05.725 "trtype": "TCP", 00:18:05.725 "adrfam": "IPv4", 00:18:05.725 "traddr": "10.0.0.2", 00:18:05.725 "trsvcid": "4420" 00:18:05.725 }, 00:18:05.725 "peer_address": { 00:18:05.725 "trtype": "TCP", 00:18:05.725 "adrfam": "IPv4", 00:18:05.725 "traddr": "10.0.0.1", 00:18:05.725 "trsvcid": "59714" 00:18:05.725 }, 00:18:05.725 "auth": { 00:18:05.725 "state": "completed", 00:18:05.725 "digest": "sha512", 00:18:05.725 "dhgroup": "ffdhe8192" 00:18:05.725 } 00:18:05.725 } 00:18:05.725 ]' 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.725 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.983 11:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.983 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.983 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.241 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:18:06.241 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: --dhchap-ctrl-secret DHHC-1:02:M2MwOGEwMGEyOWVmYzQ5MjIwZmQ5ZmJkMDY3NDdhMDRmNmU1NGRiMTM4ZGQ3ODU4gDsQkA==: 00:18:07.174 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.175 11:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.175 11:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.740 00:18:07.740 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.740 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.740 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.997 { 00:18:07.997 "cntlid": 141, 00:18:07.997 "qid": 0, 00:18:07.997 "state": "enabled", 00:18:07.997 "thread": "nvmf_tgt_poll_group_000", 00:18:07.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:07.997 "listen_address": { 00:18:07.997 "trtype": "TCP", 00:18:07.997 "adrfam": "IPv4", 00:18:07.997 "traddr": "10.0.0.2", 00:18:07.997 "trsvcid": "4420" 00:18:07.997 }, 00:18:07.997 "peer_address": { 00:18:07.997 "trtype": "TCP", 00:18:07.997 "adrfam": "IPv4", 00:18:07.997 "traddr": "10.0.0.1", 00:18:07.997 "trsvcid": "59744" 00:18:07.997 }, 00:18:07.997 "auth": { 00:18:07.997 "state": "completed", 00:18:07.997 "digest": "sha512", 00:18:07.997 "dhgroup": "ffdhe8192" 00:18:07.997 } 00:18:07.997 } 00:18:07.997 ]' 00:18:07.997 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.256 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.256 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.256 11:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.256 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.256 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.256 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.256 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.514 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:18:08.514 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:01:MmVkODE3Yzg5OWI3OWQ5MWJiZDc5ODkyMmFkY2M2YzfHVm/m: 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.080 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.338 11:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.338 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.903 00:18:09.903 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.903 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.903 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.161 { 00:18:10.161 "cntlid": 143, 00:18:10.161 "qid": 0, 00:18:10.161 "state": "enabled", 00:18:10.161 "thread": "nvmf_tgt_poll_group_000", 00:18:10.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:10.161 "listen_address": { 00:18:10.161 "trtype": "TCP", 00:18:10.161 "adrfam": "IPv4", 00:18:10.161 "traddr": "10.0.0.2", 00:18:10.161 "trsvcid": "4420" 00:18:10.161 }, 00:18:10.161 "peer_address": { 00:18:10.161 "trtype": "TCP", 00:18:10.161 "adrfam": "IPv4", 00:18:10.161 "traddr": "10.0.0.1", 00:18:10.161 "trsvcid": "59764" 00:18:10.161 }, 00:18:10.161 "auth": { 00:18:10.161 "state": "completed", 00:18:10.161 "digest": "sha512", 00:18:10.161 "dhgroup": "ffdhe8192" 00:18:10.161 } 00:18:10.161 } 00:18:10.161 ]' 00:18:10.161 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.161 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.161 
11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.419 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.419 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.419 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.419 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.419 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.677 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:10.677 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:11.611 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.612 11:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.612 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.178 00:18:12.178 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.178 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.178 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.436 { 00:18:12.436 "cntlid": 145, 00:18:12.436 "qid": 0, 00:18:12.436 "state": "enabled", 00:18:12.436 "thread": "nvmf_tgt_poll_group_000", 00:18:12.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:12.436 "listen_address": { 00:18:12.436 "trtype": "TCP", 00:18:12.436 "adrfam": "IPv4", 00:18:12.436 "traddr": "10.0.0.2", 00:18:12.436 "trsvcid": "4420" 00:18:12.436 }, 00:18:12.436 "peer_address": { 00:18:12.436 
"trtype": "TCP", 00:18:12.436 "adrfam": "IPv4", 00:18:12.436 "traddr": "10.0.0.1", 00:18:12.436 "trsvcid": "36472" 00:18:12.436 }, 00:18:12.436 "auth": { 00:18:12.436 "state": "completed", 00:18:12.436 "digest": "sha512", 00:18:12.436 "dhgroup": "ffdhe8192" 00:18:12.436 } 00:18:12.436 } 00:18:12.436 ]' 00:18:12.436 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.693 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.694 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.694 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.694 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.694 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.694 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.694 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.951 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:18:12.951 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDE5Njc4MWNiYWExZGQ5ZmU1NDVmMDA2ZmQzMGFjNjFmZWQyODBlMmI0NThiMTQzLE1V+g==: --dhchap-ctrl-secret DHHC-1:03:MmJmNjJkMjM5MzE0ZDI5YTU3YjczOTcwMDg5ZTMxY2Y4N2MyZTc0MDMzYmYzNGU5ZTc5ZWNkZmJmYjdlOTNiYnYQ51E=: 00:18:13.884 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:13.885 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:14.470 request: 00:18:14.470 { 00:18:14.470 "name": "nvme0", 00:18:14.470 "trtype": "tcp", 00:18:14.470 "traddr": "10.0.0.2", 00:18:14.470 "adrfam": "ipv4", 00:18:14.470 "trsvcid": "4420", 00:18:14.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:14.470 "prchk_reftag": false, 00:18:14.470 "prchk_guard": false, 00:18:14.470 "hdgst": false, 00:18:14.470 "ddgst": false, 00:18:14.470 "dhchap_key": "key2", 00:18:14.470 "allow_unrecognized_csi": false, 00:18:14.470 "method": "bdev_nvme_attach_controller", 00:18:14.470 "req_id": 1 00:18:14.470 } 00:18:14.470 Got JSON-RPC error response 00:18:14.470 response: 00:18:14.470 { 00:18:14.470 "code": -5, 00:18:14.470 "message": "Input/output error" 00:18:14.470 } 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.470 11:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.470 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.471 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:15.122 request: 00:18:15.122 { 00:18:15.122 "name": "nvme0", 00:18:15.122 "trtype": "tcp", 00:18:15.122 "traddr": "10.0.0.2", 00:18:15.122 "adrfam": "ipv4", 00:18:15.122 "trsvcid": "4420", 00:18:15.122 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:15.122 "prchk_reftag": false, 00:18:15.122 "prchk_guard": false, 00:18:15.122 "hdgst": false, 00:18:15.122 "ddgst": false, 00:18:15.122 "dhchap_key": "key1", 00:18:15.122 "dhchap_ctrlr_key": "ckey2", 00:18:15.122 "allow_unrecognized_csi": false, 00:18:15.122 "method": "bdev_nvme_attach_controller", 00:18:15.122 "req_id": 1 00:18:15.122 } 00:18:15.122 Got JSON-RPC error response 00:18:15.122 response: 00:18:15.122 { 00:18:15.122 "code": -5, 00:18:15.122 "message": "Input/output error" 00:18:15.122 } 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:15.122 11:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.122 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.749 request: 00:18:15.749 { 00:18:15.749 "name": "nvme0", 00:18:15.749 "trtype": "tcp", 00:18:15.749 "traddr": "10.0.0.2", 00:18:15.749 "adrfam": "ipv4", 00:18:15.749 "trsvcid": "4420", 00:18:15.749 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:15.749 "prchk_reftag": false, 00:18:15.749 "prchk_guard": false, 00:18:15.749 "hdgst": false, 00:18:15.749 "ddgst": false, 00:18:15.749 "dhchap_key": "key1", 00:18:15.749 "dhchap_ctrlr_key": "ckey1", 00:18:15.749 "allow_unrecognized_csi": false, 00:18:15.749 "method": "bdev_nvme_attach_controller", 00:18:15.749 "req_id": 1 00:18:15.749 } 00:18:15.749 Got JSON-RPC error response 00:18:15.749 response: 00:18:15.749 { 00:18:15.749 "code": -5, 00:18:15.749 "message": "Input/output error" 00:18:15.749 } 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1209204 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1209204 ']' 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1209204 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1209204 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1209204' 00:18:15.749 killing process with pid 1209204 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1209204 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1209204 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1238494 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1238494 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1238494 ']' 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:15.749 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.018 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:16.018 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:16.018 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.018 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:16.018 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1238494 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1238494 ']' 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
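[editor's note] At this point in the trace the harness has killed the first target (pid 1209204) and restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and the nvmf_auth debug log flag, then polls for the RPC socket (the wait loop continues below). A minimal bash sketch of that restart, using the binary path, namespace and flags taken verbatim from the trace; the polling loop is only an approximation of the waitforlisten helper in autotest_common.sh, not its actual implementation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target with DH-CHAP auth logging enabled and subsystem init deferred
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Rough stand-in for waitforlisten: poll the default RPC socket (/var/tmp/spdk.sock)
  # until the target answers a trivial RPC
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done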
00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:16.276 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 null0 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J8a 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.50R ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.50R 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OdX 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.pcg ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pcg 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.534 11:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uiV 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.MMZ ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MMZ 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qnL 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:16.534 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
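[editor's note] The records above switch the test from inline DHHC-1 secrets to keyring-backed DH-CHAP: each generated key file (key0..key3 plus the ckey* controller keys) is registered with keyring_file_add_key, the host NQN is added to cnode0 with --dhchap-key key3, and the attach that follows authenticates with the same key name. A condensed sketch of that sequence, reusing the key names, file paths, NQNs and sockets shown in the trace; how the host-side app at /var/tmp/host.sock resolves the key3 name (its own keyring setup) happens outside this excerpt:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc()     { "$SPDK/scripts/rpc.py" "$@"; }                       # target RPC, default /var/tmp/spdk.sock
  hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; } # host-side bdev_nvme RPC

  # Register the generated key file in the target keyring under the name key3
  rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qnL
  # Allow the host NQN to authenticate to cnode0 with key3 (no controller key for key3)
  rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3
  # Host side: attach a controller and perform DH-CHAP with the same key name
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3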
00:18:16.535 11:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.469 nvme0n1 00:18:17.469 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.469 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.469 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.036 { 00:18:18.036 "cntlid": 1, 00:18:18.036 "qid": 0, 00:18:18.036 "state": "enabled", 00:18:18.036 "thread": "nvmf_tgt_poll_group_000", 00:18:18.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:18.036 "listen_address": { 00:18:18.036 "trtype": "TCP", 00:18:18.036 "adrfam": "IPv4", 00:18:18.036 "traddr": "10.0.0.2", 00:18:18.036 "trsvcid": "4420" 00:18:18.036 }, 00:18:18.036 "peer_address": { 00:18:18.036 "trtype": "TCP", 00:18:18.036 "adrfam": "IPv4", 00:18:18.036 "traddr": "10.0.0.1", 00:18:18.036 "trsvcid": "36518" 00:18:18.036 }, 00:18:18.036 "auth": { 00:18:18.036 "state": "completed", 00:18:18.036 "digest": "sha512", 00:18:18.036 "dhgroup": "ffdhe8192" 00:18:18.036 } 00:18:18.036 } 00:18:18.036 ]' 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.036 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.294 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:18.294 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:19.227 11:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.227 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.793 request: 00:18:19.793 { 00:18:19.793 "name": "nvme0", 00:18:19.793 "trtype": "tcp", 00:18:19.793 "traddr": "10.0.0.2", 00:18:19.793 "adrfam": "ipv4", 00:18:19.793 "trsvcid": "4420", 00:18:19.793 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:19.793 "prchk_reftag": false, 00:18:19.793 "prchk_guard": false, 00:18:19.793 "hdgst": false, 00:18:19.793 "ddgst": false, 00:18:19.793 "dhchap_key": "key3", 00:18:19.793 "allow_unrecognized_csi": false, 00:18:19.793 "method": "bdev_nvme_attach_controller", 00:18:19.793 "req_id": 1 00:18:19.793 } 00:18:19.793 Got JSON-RPC error response 00:18:19.793 response: 00:18:19.793 { 00:18:19.793 "code": -5, 00:18:19.793 "message": "Input/output error" 00:18:19.793 } 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.793 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.794 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.051 request: 00:18:20.051 { 00:18:20.051 "name": "nvme0", 00:18:20.051 "trtype": "tcp", 00:18:20.051 "traddr": "10.0.0.2", 00:18:20.051 "adrfam": "ipv4", 00:18:20.051 "trsvcid": "4420", 00:18:20.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:20.051 "prchk_reftag": false, 00:18:20.051 "prchk_guard": false, 00:18:20.051 "hdgst": false, 00:18:20.051 "ddgst": false, 00:18:20.051 "dhchap_key": "key3", 00:18:20.051 "allow_unrecognized_csi": false, 00:18:20.051 "method": "bdev_nvme_attach_controller", 00:18:20.051 "req_id": 1 00:18:20.051 } 00:18:20.051 Got JSON-RPC error response 00:18:20.051 response: 00:18:20.051 { 00:18:20.051 "code": -5, 00:18:20.051 "message": "Input/output error" 00:18:20.051 } 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.309 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.568 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.827 request: 00:18:20.827 { 00:18:20.827 "name": "nvme0", 00:18:20.827 "trtype": "tcp", 00:18:20.827 "traddr": "10.0.0.2", 00:18:20.827 "adrfam": "ipv4", 00:18:20.827 "trsvcid": "4420", 00:18:20.827 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:20.827 "prchk_reftag": false, 00:18:20.827 "prchk_guard": false, 00:18:20.827 "hdgst": false, 00:18:20.827 "ddgst": false, 00:18:20.827 "dhchap_key": "key0", 00:18:20.827 "dhchap_ctrlr_key": "key1", 00:18:20.827 "allow_unrecognized_csi": false, 00:18:20.827 "method": "bdev_nvme_attach_controller", 00:18:20.827 "req_id": 1 00:18:20.827 } 00:18:20.827 Got JSON-RPC error response 00:18:20.827 response: 00:18:20.827 { 00:18:20.827 "code": -5, 00:18:20.827 "message": "Input/output error" 00:18:20.827 } 00:18:20.827 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.827 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.827 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.827 11:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.827 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:20.827 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:20.827 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:21.393 nvme0n1 00:18:21.393 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:21.393 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:21.393 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.393 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.393 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.393 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:21.958 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:22.891 nvme0n1 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.891 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.149 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.149 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:23.149 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:23.149 11:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.406 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.406 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:23.406 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlYTMyMjBlZjQyOGEwZDYyZTUxNjViMmE3YmU0MjFhZDVkMmU4MWRmNzZhYWJhNDk3ODM5MGU5NDhkZDk4ZVOjsv0=: 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.031 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:24.290 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:24.854 request: 00:18:24.854 { 00:18:24.854 "name": "nvme0", 00:18:24.854 "trtype": "tcp", 00:18:24.854 "traddr": "10.0.0.2", 00:18:24.854 "adrfam": "ipv4", 00:18:24.854 "trsvcid": "4420", 00:18:24.854 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:18:24.854 "prchk_reftag": false, 00:18:24.854 "prchk_guard": false, 00:18:24.854 "hdgst": false, 00:18:24.854 "ddgst": false, 00:18:24.854 "dhchap_key": "key1", 00:18:24.854 "allow_unrecognized_csi": false, 00:18:24.854 "method": "bdev_nvme_attach_controller", 00:18:24.854 "req_id": 1 00:18:24.854 } 00:18:24.854 Got JSON-RPC error response 00:18:24.854 response: 00:18:24.854 { 00:18:24.854 "code": -5, 00:18:24.854 "message": "Input/output error" 00:18:24.854 } 00:18:24.854 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:24.854 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.854 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.854 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.854 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:24.854 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:24.855 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.787 nvme0n1 00:18:25.787 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:25.787 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:25.787 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.044 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.044 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.044 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:26.609 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:26.867 nvme0n1 00:18:26.867 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:26.867 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.867 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:27.124 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.124 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.124 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: '' 2s 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: ]] 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzBjNTkzZTliNmFiY2ExM2E5ZGVkZTU2MzczZjIzNGPWmSRF: 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:27.382 11:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:29.279 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:29.279 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:29.279 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:29.279 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:29.279 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:29.279 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: 2s 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: ]] 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Yjc4NTAyMDhmNTg3M2FjYmQ1MTVkZTIwZjM1ODhlMmMwNWY5MjA2MDQwZmIzNDY4n3XOMQ==: 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:29.537 11:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.436 11:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:32.368 nvme0n1 00:18:32.368 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.368 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.368 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.368 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.368 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.368 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.932 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:32.932 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:32.932 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.189 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.190 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:33.190 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.190 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.447 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.447 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:33.447 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:33.447 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:33.447 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:33.447 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:33.705 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:34.280 request: 00:18:34.280 { 00:18:34.280 "name": "nvme0", 00:18:34.280 "dhchap_key": "key1", 00:18:34.280 "dhchap_ctrlr_key": "key3", 00:18:34.280 "method": "bdev_nvme_set_keys", 00:18:34.280 "req_id": 1 00:18:34.280 } 00:18:34.280 Got JSON-RPC error response 00:18:34.280 response: 00:18:34.280 { 00:18:34.280 "code": -13, 00:18:34.280 "message": "Permission denied" 00:18:34.280 } 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.280 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:34.539 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:34.539 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.911 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.842 nvme0n1 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:36.842 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:37.407 request: 00:18:37.407 { 00:18:37.407 "name": "nvme0", 00:18:37.407 "dhchap_key": "key2", 00:18:37.407 "dhchap_ctrlr_key": "key0", 00:18:37.407 "method": "bdev_nvme_set_keys", 00:18:37.407 "req_id": 1 00:18:37.407 } 00:18:37.407 Got JSON-RPC error response 00:18:37.407 response: 00:18:37.407 { 00:18:37.407 "code": -13, 00:18:37.407 "message": "Permission denied" 00:18:37.407 } 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:37.407 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.665 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:37.665 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:38.601 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:38.601 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.601 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1209225 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1209225 ']' 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1209225 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:38.860 
11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:38.860 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1209225 00:18:39.122 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:39.122 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:39.122 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1209225' 00:18:39.122 killing process with pid 1209225 00:18:39.122 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1209225 00:18:39.122 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1209225 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.389 rmmod nvme_tcp 00:18:39.389 rmmod nvme_fabrics 00:18:39.389 rmmod nvme_keyring 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1238494 ']' 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1238494 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1238494 ']' 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1238494 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1238494 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1238494' 00:18:39.389 killing process with pid 1238494 00:18:39.389 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1238494 00:18:39.389 11:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1238494 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.648 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.549 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:41.549 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.J8a /tmp/spdk.key-sha256.OdX /tmp/spdk.key-sha384.uiV /tmp/spdk.key-sha512.qnL /tmp/spdk.key-sha512.50R /tmp/spdk.key-sha384.pcg /tmp/spdk.key-sha256.MMZ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:41.549 00:18:41.549 real 3m7.131s 00:18:41.549 user 7m16.621s 00:18:41.549 sys 0m25.900s 00:18:41.549 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:41.549 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.549 ************************************ 00:18:41.549 END TEST nvmf_auth_target 00:18:41.549 ************************************ 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.808 ************************************ 00:18:41.808 START TEST nvmf_bdevio_no_huge 00:18:41.808 ************************************ 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:41.808 * Looking for test storage... 
00:18:41.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:41.808 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.809 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.809 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:41.809 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:41.809 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:42.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.068 --rc genhtml_branch_coverage=1 00:18:42.068 --rc genhtml_function_coverage=1 00:18:42.068 --rc genhtml_legend=1 00:18:42.068 --rc geninfo_all_blocks=1 00:18:42.068 --rc geninfo_unexecuted_blocks=1 00:18:42.068 00:18:42.068 ' 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:42.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.068 --rc genhtml_branch_coverage=1 00:18:42.068 --rc genhtml_function_coverage=1 00:18:42.068 --rc genhtml_legend=1 00:18:42.068 --rc geninfo_all_blocks=1 00:18:42.068 --rc geninfo_unexecuted_blocks=1 00:18:42.068 00:18:42.068 ' 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:42.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.068 --rc genhtml_branch_coverage=1 00:18:42.068 --rc genhtml_function_coverage=1 00:18:42.068 --rc genhtml_legend=1 00:18:42.068 --rc geninfo_all_blocks=1 00:18:42.068 --rc geninfo_unexecuted_blocks=1 00:18:42.068 00:18:42.068 ' 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:42.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.068 --rc genhtml_branch_coverage=1 00:18:42.068 --rc genhtml_function_coverage=1 00:18:42.068 --rc genhtml_legend=1 00:18:42.068 --rc geninfo_all_blocks=1 00:18:42.068 --rc geninfo_unexecuted_blocks=1 00:18:42.068 00:18:42.068 ' 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:42.068 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:42.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:42.069 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:47.335 
11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:47.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.335 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:47.336 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:47.336 Found net devices under 0000:af:00.0: cvl_0_0 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:47.336 Found net devices under 0000:af:00.1: cvl_0_1 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.336 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:47.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:18:47.595 00:18:47.595 --- 10.0.0.2 ping statistics --- 00:18:47.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.595 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:18:47.595 00:18:47.595 --- 10.0.0.1 ping statistics --- 00:18:47.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.595 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.595 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1246296 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1246296 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 1246296 ']' 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.854 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.854 [2024-11-15 11:36:48.507759] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:18:47.854 [2024-11-15 11:36:48.507820] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:47.854 [2024-11-15 11:36:48.591825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.854 [2024-11-15 11:36:48.635666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.854 [2024-11-15 11:36:48.635698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.854 [2024-11-15 11:36:48.635705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.854 [2024-11-15 11:36:48.635710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.854 [2024-11-15 11:36:48.635715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:47.854 [2024-11-15 11:36:48.636959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:47.854 [2024-11-15 11:36:48.637077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:47.854 [2024-11-15 11:36:48.637189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.854 [2024-11-15 11:36:48.637190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 [2024-11-15 11:36:48.780165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 Malloc0 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 [2024-11-15 11:36:48.816858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:48.112 { 00:18:48.112 "params": { 00:18:48.112 "name": "Nvme$subsystem", 00:18:48.112 "trtype": "$TEST_TRANSPORT", 00:18:48.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.112 "adrfam": "ipv4", 00:18:48.112 "trsvcid": "$NVMF_PORT", 00:18:48.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.112 "hdgst": ${hdgst:-false}, 00:18:48.112 "ddgst": ${ddgst:-false} 00:18:48.112 }, 00:18:48.112 "method": "bdev_nvme_attach_controller" 00:18:48.112 } 00:18:48.112 EOF 00:18:48.112 )") 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:48.112 11:36:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:48.112 "params": { 00:18:48.112 "name": "Nvme1", 00:18:48.112 "trtype": "tcp", 00:18:48.112 "traddr": "10.0.0.2", 00:18:48.112 "adrfam": "ipv4", 00:18:48.112 "trsvcid": "4420", 00:18:48.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.112 "hdgst": false, 00:18:48.112 "ddgst": false 00:18:48.112 }, 00:18:48.112 "method": "bdev_nvme_attach_controller" 00:18:48.112 }' 00:18:48.112 [2024-11-15 11:36:48.872723] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:18:48.112 [2024-11-15 11:36:48.872783] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1246395 ] 00:18:48.370 [2024-11-15 11:36:48.974329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.370 [2024-11-15 11:36:49.041908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.370 [2024-11-15 11:36:49.042008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.370 [2024-11-15 11:36:49.042009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.627 I/O targets: 00:18:48.627 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:48.627 00:18:48.627 00:18:48.627 CUnit - A unit testing framework for C - Version 2.1-3 00:18:48.627 http://cunit.sourceforge.net/ 00:18:48.627 00:18:48.627 00:18:48.627 Suite: bdevio tests on: Nvme1n1 00:18:48.627 Test: blockdev write read block ...passed 00:18:48.627 Test: blockdev write zeroes read block ...passed 00:18:48.627 Test: blockdev write zeroes read no split ...passed 00:18:48.627 Test: blockdev write zeroes read split ...passed 00:18:48.627 Test: blockdev write zeroes read split partial ...passed 00:18:48.627 Test: blockdev reset ...[2024-11-15 11:36:49.435092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:48.627 [2024-11-15 11:36:49.435169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d2ef0 (9): Bad file descriptor 00:18:48.884 [2024-11-15 11:36:49.583092] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:48.884 passed 00:18:48.884 Test: blockdev write read 8 blocks ...passed 00:18:48.884 Test: blockdev write read size > 128k ...passed 00:18:48.884 Test: blockdev write read invalid size ...passed 00:18:48.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:48.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:48.884 Test: blockdev write read max offset ...passed 00:18:48.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:49.141 Test: blockdev writev readv 8 blocks ...passed 00:18:49.141 Test: blockdev writev readv 30 x 1block ...passed 00:18:49.141 Test: blockdev writev readv block ...passed 00:18:49.141 Test: blockdev writev readv size > 128k ...passed 00:18:49.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:49.141 Test: blockdev comparev and writev ...[2024-11-15 11:36:49.793167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.141 [2024-11-15 11:36:49.793196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.793213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.793221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.793474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.793485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.793495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.793503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.793746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.793756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.793766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.793772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.794016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.794026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.794036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.142 [2024-11-15 11:36:49.794043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:49.142 passed 00:18:49.142 Test: blockdev nvme passthru rw ...passed 00:18:49.142 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:36:49.875850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.142 [2024-11-15 11:36:49.875864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.875967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.142 [2024-11-15 11:36:49.875977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.876077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.142 [2024-11-15 11:36:49.876086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.142 [2024-11-15 11:36:49.876187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.142 [2024-11-15 11:36:49.876196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.142 passed 00:18:49.142 Test: blockdev nvme admin passthru ...passed 00:18:49.142 Test: blockdev copy ...passed 00:18:49.142 00:18:49.142 Run Summary: Type Total Ran Passed Failed Inactive 00:18:49.142 suites 1 1 n/a 0 0 00:18:49.142 tests 23 23 23 0 0 00:18:49.142 asserts 152 152 152 0 n/a 00:18:49.142 00:18:49.142 Elapsed time = 1.224 seconds 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:49.706 rmmod nvme_tcp 00:18:49.706 rmmod nvme_fabrics 00:18:49.706 rmmod nvme_keyring 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1246296 ']' 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1246296 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 1246296 ']' 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 1246296 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1246296 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1246296' 00:18:49.706 killing process with pid 1246296 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 1246296 00:18:49.706 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 1246296 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.964 11:36:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:52.498 00:18:52.498 real 0m10.280s 00:18:52.498 user 0m12.205s 00:18:52.498 sys 0m5.321s 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.498 ************************************ 00:18:52.498 END TEST nvmf_bdevio_no_huge 00:18:52.498 ************************************ 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.498 ************************************ 00:18:52.498 START TEST nvmf_tls 00:18:52.498 ************************************ 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:52.498 * Looking for test storage... 00:18:52.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:52.498 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.499 11:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.499 --rc genhtml_branch_coverage=1 00:18:52.499 --rc genhtml_function_coverage=1 00:18:52.499 --rc genhtml_legend=1 00:18:52.499 --rc geninfo_all_blocks=1 00:18:52.499 --rc geninfo_unexecuted_blocks=1 00:18:52.499 00:18:52.499 ' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.499 --rc genhtml_branch_coverage=1 00:18:52.499 --rc genhtml_function_coverage=1 00:18:52.499 --rc genhtml_legend=1 00:18:52.499 --rc geninfo_all_blocks=1 00:18:52.499 --rc geninfo_unexecuted_blocks=1 00:18:52.499 00:18:52.499 ' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.499 --rc genhtml_branch_coverage=1 00:18:52.499 --rc genhtml_function_coverage=1 00:18:52.499 --rc genhtml_legend=1 00:18:52.499 --rc geninfo_all_blocks=1 00:18:52.499 --rc geninfo_unexecuted_blocks=1 00:18:52.499 00:18:52.499 ' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:52.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.499 --rc genhtml_branch_coverage=1 00:18:52.499 --rc genhtml_function_coverage=1 00:18:52.499 --rc genhtml_legend=1 00:18:52.499 --rc geninfo_all_blocks=1 00:18:52.499 --rc geninfo_unexecuted_blocks=1 00:18:52.499 00:18:52.499 ' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.499 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.500 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:52.500 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:52.500 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:52.500 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:57.770 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:57.770 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:57.770 Found net devices under 0000:af:00.0: cvl_0_0 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:57.770 Found net devices under 0000:af:00.1: cvl_0_1 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.770 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:18:57.771 00:18:57.771 --- 10.0.0.2 ping statistics --- 00:18:57.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.771 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:18:57.771 00:18:57.771 --- 10.0.0.1 ping statistics --- 00:18:57.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.771 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1250300 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1250300 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1250300 ']' 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.771 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.771 [2024-11-15 11:36:58.456223] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
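By this point nvmftestinit has built the loopback topology the TLS suite runs on: of the two ice-driven ports found at 0000:af:00.0/1, cvl_0_1 stays in the default namespace as the initiator interface (10.0.0.1) and cvl_0_0 is moved into a private namespace as the target interface (10.0.0.2). A minimal sketch of the same bring-up, reusing the interface names and addresses from this run (they are specific to this rig), looks like:

    # target port lives in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side keeps 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic (TCP port 4420) in on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two successful pings (0.426 ms and 0.123 ms) confirm the path, after which nvmf_tgt is started inside the namespace with --wait-for-rpc so the ssl socket options can be configured before initialization completes.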
00:18:57.771 [2024-11-15 11:36:58.456270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.771 [2024-11-15 11:36:58.515232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.771 [2024-11-15 11:36:58.553704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.771 [2024-11-15 11:36:58.553738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.771 [2024-11-15 11:36:58.553745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.771 [2024-11-15 11:36:58.553750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.771 [2024-11-15 11:36:58.553755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.771 [2024-11-15 11:36:58.554325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:58.029 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:58.287 true 00:18:58.287 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.287 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:58.544 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:58.544 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:58.544 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:58.802 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.802 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:59.060 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:59.060 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:59.060 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:59.318 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.318 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:59.576 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:59.576 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:59.576 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.576 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:59.835 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:59.835 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:59.835 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:00.092 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.092 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:00.350 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:00.350 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:00.350 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:00.607 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.607 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:00.865 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:00.866 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:00.866 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:00.866 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:00.866 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:00.866 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zC46utioZu 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.kn2DecQ9CF 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zC46utioZu 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.kn2DecQ9CF 00:19:01.124 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:01.382 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:01.641 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zC46utioZu 00:19:01.641 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zC46utioZu 00:19:01.641 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.899 [2024-11-15 11:37:02.623887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.899 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.157 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.414 [2024-11-15 11:37:03.169289] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.414 [2024-11-15 11:37:03.169527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.414 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:02.672 malloc0 00:19:02.672 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:02.929 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zC46utioZu 00:19:03.187 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.445 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zC46utioZu 00:19:15.640 Initializing NVMe Controllers 00:19:15.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:15.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:15.640 Initialization complete. Launching workers. 00:19:15.640 ======================================================== 00:19:15.640 Latency(us) 00:19:15.640 Device Information : IOPS MiB/s Average min max 00:19:15.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18132.86 70.83 3529.06 1348.01 6977.81 00:19:15.640 ======================================================== 00:19:15.640 Total : 18132.86 70.83 3529.06 1348.01 6977.81 00:19:15.640 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zC46utioZu 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zC46utioZu 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1253233 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1253233 /var/tmp/bdevperf.sock 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1253233 ']' 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
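Everything needed for the positive TLS case is now in place: tls.sh probed the ssl sock implementation (TLS version 13 and 7, ktls toggled on and off), generated two PSKs in the NVMe-oF interchange format, configured the target over RPC, and ran spdk_nvme_perf across a TLS-protected connection (18132.86 IOPS above). In the generated key NVMeTLSkey-1:01:MDAx...JEiQ:, the 1 passed as the digest argument to format_interchange_psk is what becomes the 01 field, and the middle field is the base64 of the configured key material with a checksum appended (the interchange format adds a CRC-32), which is why it is longer than the raw 32-character key. A condensed sketch of the target-side bring-up, using the same RPCs and the key file produced by mktemp in this run ($rpc is just shorthand for the scripts/rpc.py path used throughout this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # nvmf_tgt was started with --wait-for-rpc, so sock options can be set before init
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    # TLS-enabled NVMe/TCP target: -k marks the listener as TLS, --psk ties host1 to key0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.zC46utioZu
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf run that starts below repeats the I/O over the same key via bdev_nvme_attach_controller --psk key0; the cases after it deliberately present a key the target does not know, an unconfigured hostnqn or subsystem NQN, or no key at all, and are expected to fail with the Input/output error responses recorded further down.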
00:19:15.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.640 [2024-11-15 11:37:14.456295] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:15.640 [2024-11-15 11:37:14.456355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253233 ] 00:19:15.640 [2024-11-15 11:37:14.521715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.640 [2024-11-15 11:37:14.561399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:15.640 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zC46utioZu 00:19:15.641 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.641 [2024-11-15 11:37:15.204416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.641 TLSTESTn1 00:19:15.641 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:15.641 Running I/O for 10 seconds... 
00:19:16.575 5411.00 IOPS, 21.14 MiB/s [2024-11-15T10:37:18.801Z] 5701.50 IOPS, 22.27 MiB/s [2024-11-15T10:37:19.735Z] 5751.33 IOPS, 22.47 MiB/s [2024-11-15T10:37:20.667Z] 5853.25 IOPS, 22.86 MiB/s [2024-11-15T10:37:21.600Z] 5846.20 IOPS, 22.84 MiB/s [2024-11-15T10:37:22.533Z] 5859.33 IOPS, 22.89 MiB/s [2024-11-15T10:37:23.466Z] 5892.00 IOPS, 23.02 MiB/s [2024-11-15T10:37:24.838Z] 5883.38 IOPS, 22.98 MiB/s [2024-11-15T10:37:25.771Z] 5900.33 IOPS, 23.05 MiB/s [2024-11-15T10:37:25.771Z] 5921.60 IOPS, 23.13 MiB/s 00:19:24.918 Latency(us) 00:19:24.918 [2024-11-15T10:37:25.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.918 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.918 Verification LBA range: start 0x0 length 0x2000 00:19:24.918 TLSTESTn1 : 10.01 5925.46 23.15 0.00 0.00 21569.32 5451.40 25141.99 00:19:24.918 [2024-11-15T10:37:25.771Z] =================================================================================================================== 00:19:24.918 [2024-11-15T10:37:25.771Z] Total : 5925.46 23.15 0.00 0.00 21569.32 5451.40 25141.99 00:19:24.918 { 00:19:24.918 "results": [ 00:19:24.918 { 00:19:24.918 "job": "TLSTESTn1", 00:19:24.918 "core_mask": "0x4", 00:19:24.918 "workload": "verify", 00:19:24.918 "status": "finished", 00:19:24.918 "verify_range": { 00:19:24.918 "start": 0, 00:19:24.918 "length": 8192 00:19:24.918 }, 00:19:24.918 "queue_depth": 128, 00:19:24.918 "io_size": 4096, 00:19:24.918 "runtime": 10.014915, 00:19:24.918 "iops": 5925.462173168718, 00:19:24.918 "mibps": 23.146336613940306, 00:19:24.918 "io_failed": 0, 00:19:24.918 "io_timeout": 0, 00:19:24.918 "avg_latency_us": 21569.32229390615, 00:19:24.918 "min_latency_us": 5451.403636363636, 00:19:24.918 "max_latency_us": 25141.992727272725 00:19:24.918 } 00:19:24.918 ], 00:19:24.918 "core_count": 1 00:19:24.918 } 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1253233 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1253233 ']' 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1253233 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1253233 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1253233' 00:19:24.918 killing process with pid 1253233 00:19:24.918 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1253233 00:19:24.918 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.918 00:19:24.918 Latency(us) 00:19:24.918 [2024-11-15T10:37:25.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.918 [2024-11-15T10:37:25.772Z] 
=================================================================================================================== 00:19:24.919 [2024-11-15T10:37:25.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1253233 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kn2DecQ9CF 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kn2DecQ9CF 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kn2DecQ9CF 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kn2DecQ9CF 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255135 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255135 /var/tmp/bdevperf.sock 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1255135 ']' 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:24.919 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.919 [2024-11-15 11:37:25.745372] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:24.919 [2024-11-15 11:37:25.745432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255135 ] 00:19:25.177 [2024-11-15 11:37:25.811343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.177 [2024-11-15 11:37:25.851442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.177 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.177 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:25.177 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kn2DecQ9CF 00:19:25.436 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.693 [2024-11-15 11:37:26.502482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.693 [2024-11-15 11:37:26.514154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.693 [2024-11-15 11:37:26.514724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225e660 (107): Transport endpoint is not connected 00:19:25.694 [2024-11-15 11:37:26.515718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225e660 (9): Bad file descriptor 00:19:25.694 [2024-11-15 11:37:26.516720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:25.694 [2024-11-15 11:37:26.516729] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.694 [2024-11-15 11:37:26.516735] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:25.694 [2024-11-15 11:37:26.516745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:25.694 request: 00:19:25.694 { 00:19:25.694 "name": "TLSTEST", 00:19:25.694 "trtype": "tcp", 00:19:25.694 "traddr": "10.0.0.2", 00:19:25.694 "adrfam": "ipv4", 00:19:25.694 "trsvcid": "4420", 00:19:25.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.694 "prchk_reftag": false, 00:19:25.694 "prchk_guard": false, 00:19:25.694 "hdgst": false, 00:19:25.694 "ddgst": false, 00:19:25.694 "psk": "key0", 00:19:25.694 "allow_unrecognized_csi": false, 00:19:25.694 "method": "bdev_nvme_attach_controller", 00:19:25.694 "req_id": 1 00:19:25.694 } 00:19:25.694 Got JSON-RPC error response 00:19:25.694 response: 00:19:25.694 { 00:19:25.694 "code": -5, 00:19:25.694 "message": "Input/output error" 00:19:25.694 } 00:19:25.694 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1255135 00:19:25.694 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1255135 ']' 00:19:25.694 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1255135 00:19:25.694 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:25.694 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1255135 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1255135' 00:19:25.951 killing process with pid 1255135 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1255135 00:19:25.951 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.951 00:19:25.951 Latency(us) 00:19:25.951 [2024-11-15T10:37:26.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.951 [2024-11-15T10:37:26.804Z] =================================================================================================================== 00:19:25.951 [2024-11-15T10:37:26.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1255135 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zC46utioZu 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.zC46utioZu 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:25.951 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zC46utioZu 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zC46utioZu 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255339 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255339 /var/tmp/bdevperf.sock 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1255339 ']' 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.952 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.210 [2024-11-15 11:37:26.805445] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:19:26.210 [2024-11-15 11:37:26.805516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255339 ] 00:19:26.210 [2024-11-15 11:37:26.871175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.210 [2024-11-15 11:37:26.905232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.210 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.210 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:26.210 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zC46utioZu 00:19:26.468 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:26.726 [2024-11-15 11:37:27.556045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.726 [2024-11-15 11:37:27.566726] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.726 [2024-11-15 11:37:27.566749] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.726 [2024-11-15 11:37:27.566771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.726 [2024-11-15 11:37:27.567411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643660 (107): Transport endpoint is not connected 00:19:26.726 [2024-11-15 11:37:27.568404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643660 (9): Bad file descriptor 00:19:26.726 [2024-11-15 11:37:27.569406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:26.726 [2024-11-15 11:37:27.569414] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.726 [2024-11-15 11:37:27.569422] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:26.726 [2024-11-15 11:37:27.569431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:26.726 request: 00:19:26.726 { 00:19:26.726 "name": "TLSTEST", 00:19:26.726 "trtype": "tcp", 00:19:26.726 "traddr": "10.0.0.2", 00:19:26.726 "adrfam": "ipv4", 00:19:26.726 "trsvcid": "4420", 00:19:26.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.726 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:26.726 "prchk_reftag": false, 00:19:26.726 "prchk_guard": false, 00:19:26.726 "hdgst": false, 00:19:26.726 "ddgst": false, 00:19:26.726 "psk": "key0", 00:19:26.726 "allow_unrecognized_csi": false, 00:19:26.726 "method": "bdev_nvme_attach_controller", 00:19:26.726 "req_id": 1 00:19:26.726 } 00:19:26.726 Got JSON-RPC error response 00:19:26.726 response: 00:19:26.726 { 00:19:26.726 "code": -5, 00:19:26.726 "message": "Input/output error" 00:19:26.726 } 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1255339 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1255339 ']' 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1255339 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1255339 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1255339' 00:19:26.985 killing process with pid 1255339 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1255339 00:19:26.985 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.985 00:19:26.985 Latency(us) 00:19:26.985 [2024-11-15T10:37:27.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.985 [2024-11-15T10:37:27.838Z] =================================================================================================================== 00:19:26.985 [2024-11-15T10:37:27.838Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1255339 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zC46utioZu 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.zC46utioZu 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zC46utioZu 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zC46utioZu 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255605 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255605 /var/tmp/bdevperf.sock 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1255605 ']' 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.985 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.243 [2024-11-15 11:37:27.852379] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:19:27.243 [2024-11-15 11:37:27.852442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255605 ] 00:19:27.243 [2024-11-15 11:37:27.918628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.243 [2024-11-15 11:37:27.952565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.243 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.243 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:27.243 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zC46utioZu 00:19:27.501 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.759 [2024-11-15 11:37:28.599322] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.759 [2024-11-15 11:37:28.609346] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.759 [2024-11-15 11:37:28.609366] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.759 [2024-11-15 11:37:28.609387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.759 [2024-11-15 11:37:28.609669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d660 (107): Transport endpoint is not connected 00:19:27.759 [2024-11-15 11:37:28.610662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d660 (9): Bad file descriptor 00:19:27.759 [2024-11-15 11:37:28.611663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:27.759 [2024-11-15 11:37:28.611674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.759 [2024-11-15 11:37:28.611680] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:27.759 [2024-11-15 11:37:28.611690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:28.017 request: 00:19:28.017 { 00:19:28.017 "name": "TLSTEST", 00:19:28.017 "trtype": "tcp", 00:19:28.017 "traddr": "10.0.0.2", 00:19:28.017 "adrfam": "ipv4", 00:19:28.017 "trsvcid": "4420", 00:19:28.017 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:28.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.017 "prchk_reftag": false, 00:19:28.017 "prchk_guard": false, 00:19:28.017 "hdgst": false, 00:19:28.017 "ddgst": false, 00:19:28.017 "psk": "key0", 00:19:28.017 "allow_unrecognized_csi": false, 00:19:28.017 "method": "bdev_nvme_attach_controller", 00:19:28.017 "req_id": 1 00:19:28.017 } 00:19:28.017 Got JSON-RPC error response 00:19:28.017 response: 00:19:28.017 { 00:19:28.017 "code": -5, 00:19:28.017 "message": "Input/output error" 00:19:28.017 } 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1255605 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1255605 ']' 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1255605 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1255605 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1255605' 00:19:28.017 killing process with pid 1255605 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1255605 00:19:28.017 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.017 00:19:28.017 Latency(us) 00:19:28.017 [2024-11-15T10:37:28.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.017 [2024-11-15T10:37:28.870Z] =================================================================================================================== 00:19:28.017 [2024-11-15T10:37:28.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1255605 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.017 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.018 
11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255810 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255810 /var/tmp/bdevperf.sock 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1255810 ']' 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.018 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.276 [2024-11-15 11:37:28.896970] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
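
Note on this second negative case: the PSK argument is an empty string, so there is no key file at all. keyring_file_add_key validates the path before it ever opens the file, which is why the registration traced just below is rejected with "Non-absolute paths are not allowed" / "Operation not permitted" (-1), and the following bdev_nvme_attach_controller fails with -126 because no key named key0 was ever created. The failing registration in isolation (socket path copied from the trace):

    # Rejected outright: keyring_file only accepts absolute key-file paths.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
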
00:19:28.276 [2024-11-15 11:37:28.897035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255810 ] 00:19:28.276 [2024-11-15 11:37:28.963251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.276 [2024-11-15 11:37:28.999418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.276 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.276 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:28.276 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:28.534 [2024-11-15 11:37:29.373615] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:28.534 [2024-11-15 11:37:29.373646] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:28.534 request: 00:19:28.534 { 00:19:28.534 "name": "key0", 00:19:28.534 "path": "", 00:19:28.534 "method": "keyring_file_add_key", 00:19:28.534 "req_id": 1 00:19:28.534 } 00:19:28.534 Got JSON-RPC error response 00:19:28.534 response: 00:19:28.534 { 00:19:28.534 "code": -1, 00:19:28.534 "message": "Operation not permitted" 00:19:28.534 } 00:19:28.795 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.795 [2024-11-15 11:37:29.642387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.795 [2024-11-15 11:37:29.642413] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:28.795 request: 00:19:28.795 { 00:19:28.795 "name": "TLSTEST", 00:19:28.795 "trtype": "tcp", 00:19:28.795 "traddr": "10.0.0.2", 00:19:28.795 "adrfam": "ipv4", 00:19:28.795 "trsvcid": "4420", 00:19:28.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.795 "prchk_reftag": false, 00:19:28.795 "prchk_guard": false, 00:19:28.795 "hdgst": false, 00:19:28.795 "ddgst": false, 00:19:28.795 "psk": "key0", 00:19:28.795 "allow_unrecognized_csi": false, 00:19:28.795 "method": "bdev_nvme_attach_controller", 00:19:28.795 "req_id": 1 00:19:28.795 } 00:19:28.795 Got JSON-RPC error response 00:19:28.795 response: 00:19:28.795 { 00:19:28.795 "code": -126, 00:19:28.795 "message": "Required key not available" 00:19:28.795 } 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1255810 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1255810 ']' 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1255810 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1255810 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1255810' 00:19:29.053 killing process with pid 1255810 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1255810 00:19:29.053 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.053 00:19:29.053 Latency(us) 00:19:29.053 [2024-11-15T10:37:29.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.053 [2024-11-15T10:37:29.906Z] =================================================================================================================== 00:19:29.053 [2024-11-15T10:37:29.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1255810 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1250300 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1250300 ']' 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1250300 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.053 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1250300 00:19:29.310 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:29.310 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:29.310 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1250300' 00:19:29.310 killing process with pid 1250300 00:19:29.310 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1250300 00:19:29.310 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1250300 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:29.310 11:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.6M4cr9t6oG 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.6M4cr9t6oG 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1255982 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1255982 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1255982 ']' 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:29.310 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.576 [2024-11-15 11:37:30.185699] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:29.576 [2024-11-15 11:37:30.185761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.576 [2024-11-15 11:37:30.259688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.576 [2024-11-15 11:37:30.296346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.576 [2024-11-15 11:37:30.296379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
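
Note on the key generation above: format_interchange_psk turns the 48-character configured key 00112233445566778899aabbccddeeff0011223344556677 plus hash identifier 2 (SHA-384 in the PSK interchange format) into the string NVMeTLSkey-1:02:...:, writes it to a mktemp file (/tmp/tmp.6M4cr9t6oG) and chmods it 0600 so the keyring will accept it. Judging from the python heredoc in nvmf/common.sh and from the length of the base64 field (48 key bytes plus 4 trailing bytes), the encoding appears to be base64(key || CRC32(key)) with the hash id hex-encoded into the prefix. A minimal sketch under that assumption (not the library routine itself; verify against nvmf/common.sh before relying on it):

    key=00112233445566778899aabbccddeeff0011223344556677
    # Assumption: CRC32 of the key bytes, appended little-endian, then base64 of the whole.
    b64=$(python3 -c "import base64, zlib; k = b'$key'; print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode())")
    psk="NVMeTLSkey-1:02:${b64}:"      # 02 = SHA-384 hash id in the interchange format
    key_path=$(mktemp)
    echo -n "$psk" > "$key_path"
    chmod 0600 "$key_path"             # keyring_file refuses group/other-accessible key files
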
00:19:29.576 [2024-11-15 11:37:30.296386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.576 [2024-11-15 11:37:30.296392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.576 [2024-11-15 11:37:30.296397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.576 [2024-11-15 11:37:30.296967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.576 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:29.576 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:29.576 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.576 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.576 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.869 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.869 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.6M4cr9t6oG 00:19:29.869 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6M4cr9t6oG 00:19:29.869 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.869 [2024-11-15 11:37:30.691899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.869 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:30.139 11:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:30.434 [2024-11-15 11:37:31.241310] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.434 [2024-11-15 11:37:31.241531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.434 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.707 malloc0 00:19:30.708 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.993 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:19:31.264 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.521 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6M4cr9t6oG 00:19:31.521 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:31.521 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.521 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6M4cr9t6oG 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1256448 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1256448 /var/tmp/bdevperf.sock 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1256448 ']' 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.522 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.779 [2024-11-15 11:37:32.396769] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
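
Note on the target-side setup traced above (tls.sh@50-59, setup_nvmf_tgt): this is the half of the test that makes the later attach succeed. With the 0600 key in place, the helper amounts to the following RPC sequence against the target (NQNs, address and key path copied from the trace; a sketch of the helper, not the helper itself):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                      # TCP transport, default options
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                         # -k: listener requires TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0                # backing namespace (32 MiB, 4 KiB blocks)
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG        # register the PSK on the target
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                  # bind the PSK to this host identity

The bdevperf run that follows registers the same key on the initiator side and attaches with --psk key0; with matching PSKs on both ends the TLSTESTn1 bdev comes up and the 10-second verify workload runs to completion.
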
00:19:31.779 [2024-11-15 11:37:32.396833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256448 ] 00:19:31.779 [2024-11-15 11:37:32.462414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.779 [2024-11-15 11:37:32.502128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.779 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:19:32.037 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.294 [2024-11-15 11:37:33.100976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.552 TLSTESTn1 00:19:32.552 11:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:32.552 Running I/O for 10 seconds... 00:19:34.857 5909.00 IOPS, 23.08 MiB/s [2024-11-15T10:37:36.643Z] 6003.50 IOPS, 23.45 MiB/s [2024-11-15T10:37:37.576Z] 5981.67 IOPS, 23.37 MiB/s [2024-11-15T10:37:38.508Z] 6019.25 IOPS, 23.51 MiB/s [2024-11-15T10:37:39.441Z] 5994.60 IOPS, 23.42 MiB/s [2024-11-15T10:37:40.372Z] 5992.33 IOPS, 23.41 MiB/s [2024-11-15T10:37:41.304Z] 5985.71 IOPS, 23.38 MiB/s [2024-11-15T10:37:42.675Z] 5952.25 IOPS, 23.25 MiB/s [2024-11-15T10:37:43.608Z] 5744.33 IOPS, 22.44 MiB/s [2024-11-15T10:37:43.608Z] 5573.20 IOPS, 21.77 MiB/s 00:19:42.755 Latency(us) 00:19:42.755 [2024-11-15T10:37:43.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.755 Verification LBA range: start 0x0 length 0x2000 00:19:42.755 TLSTESTn1 : 10.02 5575.98 21.78 0.00 0.00 22918.99 6762.12 33125.47 00:19:42.755 [2024-11-15T10:37:43.608Z] =================================================================================================================== 00:19:42.755 [2024-11-15T10:37:43.608Z] Total : 5575.98 21.78 0.00 0.00 22918.99 6762.12 33125.47 00:19:42.755 { 00:19:42.755 "results": [ 00:19:42.755 { 00:19:42.755 "job": "TLSTESTn1", 00:19:42.755 "core_mask": "0x4", 00:19:42.755 "workload": "verify", 00:19:42.755 "status": "finished", 00:19:42.755 "verify_range": { 00:19:42.755 "start": 0, 00:19:42.755 "length": 8192 00:19:42.755 }, 00:19:42.755 "queue_depth": 128, 00:19:42.755 "io_size": 4096, 00:19:42.755 "runtime": 10.017975, 00:19:42.755 "iops": 5575.977181017121, 00:19:42.755 "mibps": 21.78116086334813, 00:19:42.755 "io_failed": 0, 00:19:42.755 "io_timeout": 0, 00:19:42.755 "avg_latency_us": 22918.988750838133, 00:19:42.755 "min_latency_us": 6762.123636363636, 00:19:42.755 "max_latency_us": 33125.46909090909 00:19:42.755 } 00:19:42.755 ], 00:19:42.755 
"core_count": 1 00:19:42.755 } 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1256448 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1256448 ']' 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1256448 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1256448 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1256448' 00:19:42.755 killing process with pid 1256448 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1256448 00:19:42.755 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.755 00:19:42.755 Latency(us) 00:19:42.755 [2024-11-15T10:37:43.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.755 [2024-11-15T10:37:43.608Z] =================================================================================================================== 00:19:42.755 [2024-11-15T10:37:43.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1256448 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.6M4cr9t6oG 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6M4cr9t6oG 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6M4cr9t6oG 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6M4cr9t6oG 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6M4cr9t6oG 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1258377 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1258377 /var/tmp/bdevperf.sock 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1258377 ']' 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.755 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.755 [2024-11-15 11:37:43.597089] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
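
Note on the chmod 0666 above (tls.sh@171): this sets up the next pair of negative cases. The keyring_file backend refuses a key file that is accessible to group or other, so the keyring_file_add_key call traced just below fails with "Invalid permissions for key file ... 0100666" and the attach then fails with -126 ("Required key not available"). The fix, applied later at tls.sh@182, is simply to restore owner-only permissions:

    stat -c '%a %n' /tmp/tmp.6M4cr9t6oG    # 666 at this point, which the keyring rejects
    chmod 0600 /tmp/tmp.6M4cr9t6oG         # owner-only; keyring_file_add_key accepts it again
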
00:19:42.755 [2024-11-15 11:37:43.597154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258377 ] 00:19:43.014 [2024-11-15 11:37:43.669235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.014 [2024-11-15 11:37:43.712183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.014 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:43.014 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:43.014 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:19:43.270 [2024-11-15 11:37:44.077611] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6M4cr9t6oG': 0100666 00:19:43.271 [2024-11-15 11:37:44.077640] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.271 request: 00:19:43.271 { 00:19:43.271 "name": "key0", 00:19:43.271 "path": "/tmp/tmp.6M4cr9t6oG", 00:19:43.271 "method": "keyring_file_add_key", 00:19:43.271 "req_id": 1 00:19:43.271 } 00:19:43.271 Got JSON-RPC error response 00:19:43.271 response: 00:19:43.271 { 00:19:43.271 "code": -1, 00:19:43.271 "message": "Operation not permitted" 00:19:43.271 } 00:19:43.271 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.528 [2024-11-15 11:37:44.350383] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.528 [2024-11-15 11:37:44.350408] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:43.528 request: 00:19:43.528 { 00:19:43.528 "name": "TLSTEST", 00:19:43.528 "trtype": "tcp", 00:19:43.528 "traddr": "10.0.0.2", 00:19:43.528 "adrfam": "ipv4", 00:19:43.528 "trsvcid": "4420", 00:19:43.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.528 "prchk_reftag": false, 00:19:43.528 "prchk_guard": false, 00:19:43.528 "hdgst": false, 00:19:43.528 "ddgst": false, 00:19:43.528 "psk": "key0", 00:19:43.528 "allow_unrecognized_csi": false, 00:19:43.528 "method": "bdev_nvme_attach_controller", 00:19:43.528 "req_id": 1 00:19:43.528 } 00:19:43.528 Got JSON-RPC error response 00:19:43.528 response: 00:19:43.528 { 00:19:43.528 "code": -126, 00:19:43.528 "message": "Required key not available" 00:19:43.528 } 00:19:43.528 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1258377 00:19:43.528 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1258377 ']' 00:19:43.528 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1258377 00:19:43.528 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:43.528 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.528 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1258377 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1258377' 00:19:43.786 killing process with pid 1258377 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1258377 00:19:43.786 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.786 00:19:43.786 Latency(us) 00:19:43.786 [2024-11-15T10:37:44.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.786 [2024-11-15T10:37:44.639Z] =================================================================================================================== 00:19:43.786 [2024-11-15T10:37:44.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1258377 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1255982 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1255982 ']' 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1255982 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1255982 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1255982' 00:19:43.786 killing process with pid 1255982 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1255982 00:19:43.786 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1255982 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1258567 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1258567 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1258567 ']' 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:44.044 11:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.044 [2024-11-15 11:37:44.839011] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:44.044 [2024-11-15 11:37:44.839059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.044 [2024-11-15 11:37:44.896269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.302 [2024-11-15 11:37:44.931432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.302 [2024-11-15 11:37:44.931471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.302 [2024-11-15 11:37:44.931477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.302 [2024-11-15 11:37:44.931483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.302 [2024-11-15 11:37:44.931504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
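
Note on the target restart above (tls.sh@176): the same over-permissive key is now exercised on the target side. The NOT setup_nvmf_tgt assertion that follows (tls.sh@178) is expected to fail partway through: keyring_file_add_key rejects the 0666 file, so no key named key0 exists when nvmf_subsystem_add_host --psk key0 runs, and that call returns -32603 "Internal error" ("Key 'key0' does not exist"). In isolation, the failing pair looks like this (paths and NQNs copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG      # fails: key file is still 0666
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                # fails: key0 was never added
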
00:19:44.302 [2024-11-15 11:37:44.932075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.6M4cr9t6oG 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6M4cr9t6oG 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.6M4cr9t6oG 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6M4cr9t6oG 00:19:44.302 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.559 [2024-11-15 11:37:45.315215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.559 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.816 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.073 [2024-11-15 11:37:45.856609] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.073 [2024-11-15 11:37:45.856812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.073 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.331 malloc0 00:19:45.331 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.590 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:19:45.850 [2024-11-15 
11:37:46.574146] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6M4cr9t6oG': 0100666 00:19:45.850 [2024-11-15 11:37:46.574171] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:45.850 request: 00:19:45.850 { 00:19:45.850 "name": "key0", 00:19:45.850 "path": "/tmp/tmp.6M4cr9t6oG", 00:19:45.850 "method": "keyring_file_add_key", 00:19:45.850 "req_id": 1 00:19:45.850 } 00:19:45.850 Got JSON-RPC error response 00:19:45.850 response: 00:19:45.850 { 00:19:45.850 "code": -1, 00:19:45.850 "message": "Operation not permitted" 00:19:45.850 } 00:19:45.850 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.108 [2024-11-15 11:37:46.850883] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:46.108 [2024-11-15 11:37:46.850919] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:46.108 request: 00:19:46.108 { 00:19:46.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.108 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.108 "psk": "key0", 00:19:46.108 "method": "nvmf_subsystem_add_host", 00:19:46.108 "req_id": 1 00:19:46.108 } 00:19:46.108 Got JSON-RPC error response 00:19:46.108 response: 00:19:46.108 { 00:19:46.108 "code": -32603, 00:19:46.108 "message": "Internal error" 00:19:46.108 } 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1258567 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1258567 ']' 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1258567 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1258567 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1258567' 00:19:46.108 killing process with pid 1258567 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1258567 00:19:46.108 11:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1258567 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.6M4cr9t6oG 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:46.367 11:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1259114 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1259114 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1259114 ']' 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:46.367 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.367 [2024-11-15 11:37:47.152237] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:46.367 [2024-11-15 11:37:47.152280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.367 [2024-11-15 11:37:47.207802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.626 [2024-11-15 11:37:47.241484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.626 [2024-11-15 11:37:47.241517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.626 [2024-11-15 11:37:47.241524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.626 [2024-11-15 11:37:47.241530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.626 [2024-11-15 11:37:47.241535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
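
Note on the rerun that follows (tls.sh@182-198): with the key chmodded back to 0600, the target is started once more (pid 1259114), setup_nvmf_tgt completes, and a fresh bdevperf (pid 1259406) registers the key, attaches with --psk key0 and brings up TLSTESTn1. The test then snapshots the target configuration with save_config; that is the large JSON document dumped below. The script keeps the output in a shell variable (tgtconf); the redirect here is only for illustration:

    # Snapshot the running target's JSON configuration, including the keyring entry for key0,
    # the TLS listener and the subsystem/host binding set up above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config > tgtconf.json
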
00:19:46.626 [2024-11-15 11:37:47.242090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.6M4cr9t6oG 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6M4cr9t6oG 00:19:46.626 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.884 [2024-11-15 11:37:47.620163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.884 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.142 11:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.400 [2024-11-15 11:37:48.073356] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.400 [2024-11-15 11:37:48.073590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.400 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.658 malloc0 00:19:47.658 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.916 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:19:48.173 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1259406 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1259406 /var/tmp/bdevperf.sock 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1259406 ']' 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.431 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.432 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.432 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.432 [2024-11-15 11:37:49.131905] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:48.432 [2024-11-15 11:37:49.131967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259406 ] 00:19:48.432 [2024-11-15 11:37:49.199222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.432 [2024-11-15 11:37:49.237285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.690 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.690 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:48.690 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:19:48.690 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.948 [2024-11-15 11:37:49.753004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.206 TLSTESTn1 00:19:49.206 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:49.464 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:49.464 "subsystems": [ 00:19:49.464 { 00:19:49.464 "subsystem": "keyring", 00:19:49.464 "config": [ 00:19:49.464 { 00:19:49.464 "method": "keyring_file_add_key", 00:19:49.464 "params": { 00:19:49.464 "name": "key0", 00:19:49.464 "path": "/tmp/tmp.6M4cr9t6oG" 00:19:49.464 } 00:19:49.464 } 00:19:49.464 ] 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "subsystem": "iobuf", 00:19:49.464 "config": [ 00:19:49.464 { 00:19:49.464 "method": "iobuf_set_options", 00:19:49.464 "params": { 00:19:49.464 "small_pool_count": 8192, 00:19:49.464 "large_pool_count": 1024, 00:19:49.464 "small_bufsize": 8192, 00:19:49.464 "large_bufsize": 135168, 00:19:49.464 "enable_numa": false 00:19:49.464 } 00:19:49.464 } 00:19:49.464 ] 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "subsystem": "sock", 00:19:49.464 "config": [ 00:19:49.464 { 00:19:49.464 "method": "sock_set_default_impl", 00:19:49.464 "params": { 00:19:49.464 "impl_name": "posix" 
00:19:49.464 } 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "method": "sock_impl_set_options", 00:19:49.464 "params": { 00:19:49.464 "impl_name": "ssl", 00:19:49.464 "recv_buf_size": 4096, 00:19:49.464 "send_buf_size": 4096, 00:19:49.464 "enable_recv_pipe": true, 00:19:49.464 "enable_quickack": false, 00:19:49.464 "enable_placement_id": 0, 00:19:49.464 "enable_zerocopy_send_server": true, 00:19:49.464 "enable_zerocopy_send_client": false, 00:19:49.464 "zerocopy_threshold": 0, 00:19:49.464 "tls_version": 0, 00:19:49.464 "enable_ktls": false 00:19:49.464 } 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "method": "sock_impl_set_options", 00:19:49.464 "params": { 00:19:49.464 "impl_name": "posix", 00:19:49.464 "recv_buf_size": 2097152, 00:19:49.464 "send_buf_size": 2097152, 00:19:49.464 "enable_recv_pipe": true, 00:19:49.464 "enable_quickack": false, 00:19:49.464 "enable_placement_id": 0, 00:19:49.464 "enable_zerocopy_send_server": true, 00:19:49.464 "enable_zerocopy_send_client": false, 00:19:49.464 "zerocopy_threshold": 0, 00:19:49.464 "tls_version": 0, 00:19:49.464 "enable_ktls": false 00:19:49.464 } 00:19:49.464 } 00:19:49.464 ] 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "subsystem": "vmd", 00:19:49.464 "config": [] 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "subsystem": "accel", 00:19:49.464 "config": [ 00:19:49.464 { 00:19:49.464 "method": "accel_set_options", 00:19:49.464 "params": { 00:19:49.464 "small_cache_size": 128, 00:19:49.464 "large_cache_size": 16, 00:19:49.464 "task_count": 2048, 00:19:49.464 "sequence_count": 2048, 00:19:49.464 "buf_count": 2048 00:19:49.464 } 00:19:49.464 } 00:19:49.464 ] 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "subsystem": "bdev", 00:19:49.464 "config": [ 00:19:49.464 { 00:19:49.464 "method": "bdev_set_options", 00:19:49.464 "params": { 00:19:49.464 "bdev_io_pool_size": 65535, 00:19:49.464 "bdev_io_cache_size": 256, 00:19:49.464 "bdev_auto_examine": true, 00:19:49.464 "iobuf_small_cache_size": 128, 00:19:49.464 "iobuf_large_cache_size": 16 00:19:49.464 } 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "method": "bdev_raid_set_options", 00:19:49.464 "params": { 00:19:49.464 "process_window_size_kb": 1024, 00:19:49.464 "process_max_bandwidth_mb_sec": 0 00:19:49.464 } 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "method": "bdev_iscsi_set_options", 00:19:49.464 "params": { 00:19:49.464 "timeout_sec": 30 00:19:49.464 } 00:19:49.464 }, 00:19:49.464 { 00:19:49.464 "method": "bdev_nvme_set_options", 00:19:49.464 "params": { 00:19:49.464 "action_on_timeout": "none", 00:19:49.464 "timeout_us": 0, 00:19:49.464 "timeout_admin_us": 0, 00:19:49.464 "keep_alive_timeout_ms": 10000, 00:19:49.464 "arbitration_burst": 0, 00:19:49.464 "low_priority_weight": 0, 00:19:49.464 "medium_priority_weight": 0, 00:19:49.464 "high_priority_weight": 0, 00:19:49.464 "nvme_adminq_poll_period_us": 10000, 00:19:49.464 "nvme_ioq_poll_period_us": 0, 00:19:49.464 "io_queue_requests": 0, 00:19:49.464 "delay_cmd_submit": true, 00:19:49.464 "transport_retry_count": 4, 00:19:49.464 "bdev_retry_count": 3, 00:19:49.464 "transport_ack_timeout": 0, 00:19:49.464 "ctrlr_loss_timeout_sec": 0, 00:19:49.464 "reconnect_delay_sec": 0, 00:19:49.464 "fast_io_fail_timeout_sec": 0, 00:19:49.464 "disable_auto_failback": false, 00:19:49.464 "generate_uuids": false, 00:19:49.464 "transport_tos": 0, 00:19:49.464 "nvme_error_stat": false, 00:19:49.464 "rdma_srq_size": 0, 00:19:49.464 "io_path_stat": false, 00:19:49.464 "allow_accel_sequence": false, 00:19:49.464 "rdma_max_cq_size": 0, 00:19:49.464 
"rdma_cm_event_timeout_ms": 0, 00:19:49.464 "dhchap_digests": [ 00:19:49.464 "sha256", 00:19:49.464 "sha384", 00:19:49.465 "sha512" 00:19:49.465 ], 00:19:49.465 "dhchap_dhgroups": [ 00:19:49.465 "null", 00:19:49.465 "ffdhe2048", 00:19:49.465 "ffdhe3072", 00:19:49.465 "ffdhe4096", 00:19:49.465 "ffdhe6144", 00:19:49.465 "ffdhe8192" 00:19:49.465 ] 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "bdev_nvme_set_hotplug", 00:19:49.465 "params": { 00:19:49.465 "period_us": 100000, 00:19:49.465 "enable": false 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "bdev_malloc_create", 00:19:49.465 "params": { 00:19:49.465 "name": "malloc0", 00:19:49.465 "num_blocks": 8192, 00:19:49.465 "block_size": 4096, 00:19:49.465 "physical_block_size": 4096, 00:19:49.465 "uuid": "7863b46d-effc-428c-8fe3-d8478348b4ce", 00:19:49.465 "optimal_io_boundary": 0, 00:19:49.465 "md_size": 0, 00:19:49.465 "dif_type": 0, 00:19:49.465 "dif_is_head_of_md": false, 00:19:49.465 "dif_pi_format": 0 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "bdev_wait_for_examine" 00:19:49.465 } 00:19:49.465 ] 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "subsystem": "nbd", 00:19:49.465 "config": [] 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "subsystem": "scheduler", 00:19:49.465 "config": [ 00:19:49.465 { 00:19:49.465 "method": "framework_set_scheduler", 00:19:49.465 "params": { 00:19:49.465 "name": "static" 00:19:49.465 } 00:19:49.465 } 00:19:49.465 ] 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "subsystem": "nvmf", 00:19:49.465 "config": [ 00:19:49.465 { 00:19:49.465 "method": "nvmf_set_config", 00:19:49.465 "params": { 00:19:49.465 "discovery_filter": "match_any", 00:19:49.465 "admin_cmd_passthru": { 00:19:49.465 "identify_ctrlr": false 00:19:49.465 }, 00:19:49.465 "dhchap_digests": [ 00:19:49.465 "sha256", 00:19:49.465 "sha384", 00:19:49.465 "sha512" 00:19:49.465 ], 00:19:49.465 "dhchap_dhgroups": [ 00:19:49.465 "null", 00:19:49.465 "ffdhe2048", 00:19:49.465 "ffdhe3072", 00:19:49.465 "ffdhe4096", 00:19:49.465 "ffdhe6144", 00:19:49.465 "ffdhe8192" 00:19:49.465 ] 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_set_max_subsystems", 00:19:49.465 "params": { 00:19:49.465 "max_subsystems": 1024 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_set_crdt", 00:19:49.465 "params": { 00:19:49.465 "crdt1": 0, 00:19:49.465 "crdt2": 0, 00:19:49.465 "crdt3": 0 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_create_transport", 00:19:49.465 "params": { 00:19:49.465 "trtype": "TCP", 00:19:49.465 "max_queue_depth": 128, 00:19:49.465 "max_io_qpairs_per_ctrlr": 127, 00:19:49.465 "in_capsule_data_size": 4096, 00:19:49.465 "max_io_size": 131072, 00:19:49.465 "io_unit_size": 131072, 00:19:49.465 "max_aq_depth": 128, 00:19:49.465 "num_shared_buffers": 511, 00:19:49.465 "buf_cache_size": 4294967295, 00:19:49.465 "dif_insert_or_strip": false, 00:19:49.465 "zcopy": false, 00:19:49.465 "c2h_success": false, 00:19:49.465 "sock_priority": 0, 00:19:49.465 "abort_timeout_sec": 1, 00:19:49.465 "ack_timeout": 0, 00:19:49.465 "data_wr_pool_size": 0 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_create_subsystem", 00:19:49.465 "params": { 00:19:49.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.465 "allow_any_host": false, 00:19:49.465 "serial_number": "SPDK00000000000001", 00:19:49.465 "model_number": "SPDK bdev Controller", 00:19:49.465 "max_namespaces": 10, 00:19:49.465 "min_cntlid": 1, 00:19:49.465 
"max_cntlid": 65519, 00:19:49.465 "ana_reporting": false 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_subsystem_add_host", 00:19:49.465 "params": { 00:19:49.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.465 "host": "nqn.2016-06.io.spdk:host1", 00:19:49.465 "psk": "key0" 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_subsystem_add_ns", 00:19:49.465 "params": { 00:19:49.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.465 "namespace": { 00:19:49.465 "nsid": 1, 00:19:49.465 "bdev_name": "malloc0", 00:19:49.465 "nguid": "7863B46DEFFC428C8FE3D8478348B4CE", 00:19:49.465 "uuid": "7863b46d-effc-428c-8fe3-d8478348b4ce", 00:19:49.465 "no_auto_visible": false 00:19:49.465 } 00:19:49.465 } 00:19:49.465 }, 00:19:49.465 { 00:19:49.465 "method": "nvmf_subsystem_add_listener", 00:19:49.465 "params": { 00:19:49.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.465 "listen_address": { 00:19:49.465 "trtype": "TCP", 00:19:49.465 "adrfam": "IPv4", 00:19:49.465 "traddr": "10.0.0.2", 00:19:49.465 "trsvcid": "4420" 00:19:49.465 }, 00:19:49.465 "secure_channel": true 00:19:49.465 } 00:19:49.465 } 00:19:49.465 ] 00:19:49.465 } 00:19:49.465 ] 00:19:49.465 }' 00:19:49.465 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:49.724 "subsystems": [ 00:19:49.724 { 00:19:49.724 "subsystem": "keyring", 00:19:49.724 "config": [ 00:19:49.724 { 00:19:49.724 "method": "keyring_file_add_key", 00:19:49.724 "params": { 00:19:49.724 "name": "key0", 00:19:49.724 "path": "/tmp/tmp.6M4cr9t6oG" 00:19:49.724 } 00:19:49.724 } 00:19:49.724 ] 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "subsystem": "iobuf", 00:19:49.724 "config": [ 00:19:49.724 { 00:19:49.724 "method": "iobuf_set_options", 00:19:49.724 "params": { 00:19:49.724 "small_pool_count": 8192, 00:19:49.724 "large_pool_count": 1024, 00:19:49.724 "small_bufsize": 8192, 00:19:49.724 "large_bufsize": 135168, 00:19:49.724 "enable_numa": false 00:19:49.724 } 00:19:49.724 } 00:19:49.724 ] 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "subsystem": "sock", 00:19:49.724 "config": [ 00:19:49.724 { 00:19:49.724 "method": "sock_set_default_impl", 00:19:49.724 "params": { 00:19:49.724 "impl_name": "posix" 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "sock_impl_set_options", 00:19:49.724 "params": { 00:19:49.724 "impl_name": "ssl", 00:19:49.724 "recv_buf_size": 4096, 00:19:49.724 "send_buf_size": 4096, 00:19:49.724 "enable_recv_pipe": true, 00:19:49.724 "enable_quickack": false, 00:19:49.724 "enable_placement_id": 0, 00:19:49.724 "enable_zerocopy_send_server": true, 00:19:49.724 "enable_zerocopy_send_client": false, 00:19:49.724 "zerocopy_threshold": 0, 00:19:49.724 "tls_version": 0, 00:19:49.724 "enable_ktls": false 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "sock_impl_set_options", 00:19:49.724 "params": { 00:19:49.724 "impl_name": "posix", 00:19:49.724 "recv_buf_size": 2097152, 00:19:49.724 "send_buf_size": 2097152, 00:19:49.724 "enable_recv_pipe": true, 00:19:49.724 "enable_quickack": false, 00:19:49.724 "enable_placement_id": 0, 00:19:49.724 "enable_zerocopy_send_server": true, 00:19:49.724 "enable_zerocopy_send_client": false, 00:19:49.724 "zerocopy_threshold": 0, 00:19:49.724 "tls_version": 0, 00:19:49.724 "enable_ktls": false 00:19:49.724 } 00:19:49.724 
} 00:19:49.724 ] 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "subsystem": "vmd", 00:19:49.724 "config": [] 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "subsystem": "accel", 00:19:49.724 "config": [ 00:19:49.724 { 00:19:49.724 "method": "accel_set_options", 00:19:49.724 "params": { 00:19:49.724 "small_cache_size": 128, 00:19:49.724 "large_cache_size": 16, 00:19:49.724 "task_count": 2048, 00:19:49.724 "sequence_count": 2048, 00:19:49.724 "buf_count": 2048 00:19:49.724 } 00:19:49.724 } 00:19:49.724 ] 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "subsystem": "bdev", 00:19:49.724 "config": [ 00:19:49.724 { 00:19:49.724 "method": "bdev_set_options", 00:19:49.724 "params": { 00:19:49.724 "bdev_io_pool_size": 65535, 00:19:49.724 "bdev_io_cache_size": 256, 00:19:49.724 "bdev_auto_examine": true, 00:19:49.724 "iobuf_small_cache_size": 128, 00:19:49.724 "iobuf_large_cache_size": 16 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "bdev_raid_set_options", 00:19:49.724 "params": { 00:19:49.724 "process_window_size_kb": 1024, 00:19:49.724 "process_max_bandwidth_mb_sec": 0 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "bdev_iscsi_set_options", 00:19:49.724 "params": { 00:19:49.724 "timeout_sec": 30 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "bdev_nvme_set_options", 00:19:49.724 "params": { 00:19:49.724 "action_on_timeout": "none", 00:19:49.724 "timeout_us": 0, 00:19:49.724 "timeout_admin_us": 0, 00:19:49.724 "keep_alive_timeout_ms": 10000, 00:19:49.724 "arbitration_burst": 0, 00:19:49.724 "low_priority_weight": 0, 00:19:49.724 "medium_priority_weight": 0, 00:19:49.724 "high_priority_weight": 0, 00:19:49.724 "nvme_adminq_poll_period_us": 10000, 00:19:49.724 "nvme_ioq_poll_period_us": 0, 00:19:49.724 "io_queue_requests": 512, 00:19:49.724 "delay_cmd_submit": true, 00:19:49.724 "transport_retry_count": 4, 00:19:49.724 "bdev_retry_count": 3, 00:19:49.724 "transport_ack_timeout": 0, 00:19:49.724 "ctrlr_loss_timeout_sec": 0, 00:19:49.724 "reconnect_delay_sec": 0, 00:19:49.724 "fast_io_fail_timeout_sec": 0, 00:19:49.724 "disable_auto_failback": false, 00:19:49.724 "generate_uuids": false, 00:19:49.724 "transport_tos": 0, 00:19:49.724 "nvme_error_stat": false, 00:19:49.724 "rdma_srq_size": 0, 00:19:49.724 "io_path_stat": false, 00:19:49.724 "allow_accel_sequence": false, 00:19:49.724 "rdma_max_cq_size": 0, 00:19:49.724 "rdma_cm_event_timeout_ms": 0, 00:19:49.724 "dhchap_digests": [ 00:19:49.724 "sha256", 00:19:49.724 "sha384", 00:19:49.724 "sha512" 00:19:49.724 ], 00:19:49.724 "dhchap_dhgroups": [ 00:19:49.724 "null", 00:19:49.724 "ffdhe2048", 00:19:49.724 "ffdhe3072", 00:19:49.724 "ffdhe4096", 00:19:49.724 "ffdhe6144", 00:19:49.724 "ffdhe8192" 00:19:49.724 ] 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "bdev_nvme_attach_controller", 00:19:49.724 "params": { 00:19:49.724 "name": "TLSTEST", 00:19:49.724 "trtype": "TCP", 00:19:49.724 "adrfam": "IPv4", 00:19:49.724 "traddr": "10.0.0.2", 00:19:49.724 "trsvcid": "4420", 00:19:49.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.724 "prchk_reftag": false, 00:19:49.724 "prchk_guard": false, 00:19:49.724 "ctrlr_loss_timeout_sec": 0, 00:19:49.724 "reconnect_delay_sec": 0, 00:19:49.724 "fast_io_fail_timeout_sec": 0, 00:19:49.724 "psk": "key0", 00:19:49.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.724 "hdgst": false, 00:19:49.724 "ddgst": false, 00:19:49.724 "multipath": "multipath" 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": 
"bdev_nvme_set_hotplug", 00:19:49.724 "params": { 00:19:49.724 "period_us": 100000, 00:19:49.724 "enable": false 00:19:49.724 } 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "method": "bdev_wait_for_examine" 00:19:49.724 } 00:19:49.724 ] 00:19:49.724 }, 00:19:49.724 { 00:19:49.724 "subsystem": "nbd", 00:19:49.724 "config": [] 00:19:49.724 } 00:19:49.724 ] 00:19:49.724 }' 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1259406 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1259406 ']' 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1259406 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1259406 00:19:49.724 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:49.725 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:49.725 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1259406' 00:19:49.725 killing process with pid 1259406 00:19:49.725 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1259406 00:19:49.725 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.725 00:19:49.725 Latency(us) 00:19:49.725 [2024-11-15T10:37:50.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.725 [2024-11-15T10:37:50.578Z] =================================================================================================================== 00:19:49.725 [2024-11-15T10:37:50.578Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.725 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1259406 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1259114 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1259114 ']' 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1259114 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1259114 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1259114' 00:19:49.983 killing process with pid 1259114 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1259114 00:19:49.983 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1259114 00:19:50.241 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:50.241 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.241 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.241 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.241 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:50.241 "subsystems": [ 00:19:50.241 { 00:19:50.241 "subsystem": "keyring", 00:19:50.241 "config": [ 00:19:50.241 { 00:19:50.241 "method": "keyring_file_add_key", 00:19:50.241 "params": { 00:19:50.241 "name": "key0", 00:19:50.241 "path": "/tmp/tmp.6M4cr9t6oG" 00:19:50.241 } 00:19:50.242 } 00:19:50.242 ] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "iobuf", 00:19:50.242 "config": [ 00:19:50.242 { 00:19:50.242 "method": "iobuf_set_options", 00:19:50.242 "params": { 00:19:50.242 "small_pool_count": 8192, 00:19:50.242 "large_pool_count": 1024, 00:19:50.242 "small_bufsize": 8192, 00:19:50.242 "large_bufsize": 135168, 00:19:50.242 "enable_numa": false 00:19:50.242 } 00:19:50.242 } 00:19:50.242 ] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "sock", 00:19:50.242 "config": [ 00:19:50.242 { 00:19:50.242 "method": "sock_set_default_impl", 00:19:50.242 "params": { 00:19:50.242 "impl_name": "posix" 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "sock_impl_set_options", 00:19:50.242 "params": { 00:19:50.242 "impl_name": "ssl", 00:19:50.242 "recv_buf_size": 4096, 00:19:50.242 "send_buf_size": 4096, 00:19:50.242 "enable_recv_pipe": true, 00:19:50.242 "enable_quickack": false, 00:19:50.242 "enable_placement_id": 0, 00:19:50.242 "enable_zerocopy_send_server": true, 00:19:50.242 "enable_zerocopy_send_client": false, 00:19:50.242 "zerocopy_threshold": 0, 00:19:50.242 "tls_version": 0, 00:19:50.242 "enable_ktls": false 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "sock_impl_set_options", 00:19:50.242 "params": { 00:19:50.242 "impl_name": "posix", 00:19:50.242 "recv_buf_size": 2097152, 00:19:50.242 "send_buf_size": 2097152, 00:19:50.242 "enable_recv_pipe": true, 00:19:50.242 "enable_quickack": false, 00:19:50.242 "enable_placement_id": 0, 00:19:50.242 "enable_zerocopy_send_server": true, 00:19:50.242 "enable_zerocopy_send_client": false, 00:19:50.242 "zerocopy_threshold": 0, 00:19:50.242 "tls_version": 0, 00:19:50.242 "enable_ktls": false 00:19:50.242 } 00:19:50.242 } 00:19:50.242 ] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "vmd", 00:19:50.242 "config": [] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "accel", 00:19:50.242 "config": [ 00:19:50.242 { 00:19:50.242 "method": "accel_set_options", 00:19:50.242 "params": { 00:19:50.242 "small_cache_size": 128, 00:19:50.242 "large_cache_size": 16, 00:19:50.242 "task_count": 2048, 00:19:50.242 "sequence_count": 2048, 00:19:50.242 "buf_count": 2048 00:19:50.242 } 00:19:50.242 } 00:19:50.242 ] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "bdev", 00:19:50.242 "config": [ 00:19:50.242 { 00:19:50.242 "method": "bdev_set_options", 00:19:50.242 "params": { 00:19:50.242 "bdev_io_pool_size": 65535, 00:19:50.242 "bdev_io_cache_size": 256, 00:19:50.242 "bdev_auto_examine": true, 00:19:50.242 "iobuf_small_cache_size": 128, 00:19:50.242 "iobuf_large_cache_size": 16 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "bdev_raid_set_options", 00:19:50.242 "params": { 00:19:50.242 
"process_window_size_kb": 1024, 00:19:50.242 "process_max_bandwidth_mb_sec": 0 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "bdev_iscsi_set_options", 00:19:50.242 "params": { 00:19:50.242 "timeout_sec": 30 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "bdev_nvme_set_options", 00:19:50.242 "params": { 00:19:50.242 "action_on_timeout": "none", 00:19:50.242 "timeout_us": 0, 00:19:50.242 "timeout_admin_us": 0, 00:19:50.242 "keep_alive_timeout_ms": 10000, 00:19:50.242 "arbitration_burst": 0, 00:19:50.242 "low_priority_weight": 0, 00:19:50.242 "medium_priority_weight": 0, 00:19:50.242 "high_priority_weight": 0, 00:19:50.242 "nvme_adminq_poll_period_us": 10000, 00:19:50.242 "nvme_ioq_poll_period_us": 0, 00:19:50.242 "io_queue_requests": 0, 00:19:50.242 "delay_cmd_submit": true, 00:19:50.242 "transport_retry_count": 4, 00:19:50.242 "bdev_retry_count": 3, 00:19:50.242 "transport_ack_timeout": 0, 00:19:50.242 "ctrlr_loss_timeout_sec": 0, 00:19:50.242 "reconnect_delay_sec": 0, 00:19:50.242 "fast_io_fail_timeout_sec": 0, 00:19:50.242 "disable_auto_failback": false, 00:19:50.242 "generate_uuids": false, 00:19:50.242 "transport_tos": 0, 00:19:50.242 "nvme_error_stat": false, 00:19:50.242 "rdma_srq_size": 0, 00:19:50.242 "io_path_stat": false, 00:19:50.242 "allow_accel_sequence": false, 00:19:50.242 "rdma_max_cq_size": 0, 00:19:50.242 "rdma_cm_event_timeout_ms": 0, 00:19:50.242 "dhchap_digests": [ 00:19:50.242 "sha256", 00:19:50.242 "sha384", 00:19:50.242 "sha512" 00:19:50.242 ], 00:19:50.242 "dhchap_dhgroups": [ 00:19:50.242 "null", 00:19:50.242 "ffdhe2048", 00:19:50.242 "ffdhe3072", 00:19:50.242 "ffdhe4096", 00:19:50.242 "ffdhe6144", 00:19:50.242 "ffdhe8192" 00:19:50.242 ] 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "bdev_nvme_set_hotplug", 00:19:50.242 "params": { 00:19:50.242 "period_us": 100000, 00:19:50.242 "enable": false 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "bdev_malloc_create", 00:19:50.242 "params": { 00:19:50.242 "name": "malloc0", 00:19:50.242 "num_blocks": 8192, 00:19:50.242 "block_size": 4096, 00:19:50.242 "physical_block_size": 4096, 00:19:50.242 "uuid": "7863b46d-effc-428c-8fe3-d8478348b4ce", 00:19:50.242 "optimal_io_boundary": 0, 00:19:50.242 "md_size": 0, 00:19:50.242 "dif_type": 0, 00:19:50.242 "dif_is_head_of_md": false, 00:19:50.242 "dif_pi_format": 0 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "bdev_wait_for_examine" 00:19:50.242 } 00:19:50.242 ] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "nbd", 00:19:50.242 "config": [] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "scheduler", 00:19:50.242 "config": [ 00:19:50.242 { 00:19:50.242 "method": "framework_set_scheduler", 00:19:50.242 "params": { 00:19:50.242 "name": "static" 00:19:50.242 } 00:19:50.242 } 00:19:50.242 ] 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "subsystem": "nvmf", 00:19:50.242 "config": [ 00:19:50.242 { 00:19:50.242 "method": "nvmf_set_config", 00:19:50.242 "params": { 00:19:50.242 "discovery_filter": "match_any", 00:19:50.242 "admin_cmd_passthru": { 00:19:50.242 "identify_ctrlr": false 00:19:50.242 }, 00:19:50.242 "dhchap_digests": [ 00:19:50.242 "sha256", 00:19:50.242 "sha384", 00:19:50.242 "sha512" 00:19:50.242 ], 00:19:50.242 "dhchap_dhgroups": [ 00:19:50.242 "null", 00:19:50.242 "ffdhe2048", 00:19:50.242 "ffdhe3072", 00:19:50.242 "ffdhe4096", 00:19:50.242 "ffdhe6144", 00:19:50.242 "ffdhe8192" 00:19:50.242 ] 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 
00:19:50.242 "method": "nvmf_set_max_subsystems", 00:19:50.242 "params": { 00:19:50.242 "max_subsystems": 1024 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "nvmf_set_crdt", 00:19:50.242 "params": { 00:19:50.242 "crdt1": 0, 00:19:50.242 "crdt2": 0, 00:19:50.242 "crdt3": 0 00:19:50.242 } 00:19:50.242 }, 00:19:50.242 { 00:19:50.242 "method": "nvmf_create_transport", 00:19:50.242 "params": { 00:19:50.242 "trtype": "TCP", 00:19:50.242 "max_queue_depth": 128, 00:19:50.242 "max_io_qpairs_per_ctrlr": 127, 00:19:50.242 "in_capsule_data_size": 4096, 00:19:50.242 "max_io_size": 131072, 00:19:50.242 "io_unit_size": 131072, 00:19:50.242 "max_aq_depth": 128, 00:19:50.242 "num_shared_buffers": 511, 00:19:50.242 "buf_cache_size": 4294967295, 00:19:50.242 "dif_insert_or_strip": false, 00:19:50.242 "zcopy": false, 00:19:50.243 "c2h_success": false, 00:19:50.243 "sock_priority": 0, 00:19:50.243 "abort_timeout_sec": 1, 00:19:50.243 "ack_timeout": 0, 00:19:50.243 "data_wr_pool_size": 0 00:19:50.243 } 00:19:50.243 }, 00:19:50.243 { 00:19:50.243 "method": "nvmf_create_subsystem", 00:19:50.243 "params": { 00:19:50.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.243 "allow_any_host": false, 00:19:50.243 "serial_number": "SPDK00000000000001", 00:19:50.243 "model_number": "SPDK bdev Controller", 00:19:50.243 "max_namespaces": 10, 00:19:50.243 "min_cntlid": 1, 00:19:50.243 "max_cntlid": 65519, 00:19:50.243 "ana_reporting": false 00:19:50.243 } 00:19:50.243 }, 00:19:50.243 { 00:19:50.243 "method": "nvmf_subsystem_add_host", 00:19:50.243 "params": { 00:19:50.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.243 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.243 "psk": "key0" 00:19:50.243 } 00:19:50.243 }, 00:19:50.243 { 00:19:50.243 "method": "nvmf_subsystem_add_ns", 00:19:50.243 "params": { 00:19:50.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.243 "namespace": { 00:19:50.243 "nsid": 1, 00:19:50.243 "bdev_name": "malloc0", 00:19:50.243 "nguid": "7863B46DEFFC428C8FE3D8478348B4CE", 00:19:50.243 "uuid": "7863b46d-effc-428c-8fe3-d8478348b4ce", 00:19:50.243 "no_auto_visible": false 00:19:50.243 } 00:19:50.243 } 00:19:50.243 }, 00:19:50.243 { 00:19:50.243 "method": "nvmf_subsystem_add_listener", 00:19:50.243 "params": { 00:19:50.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.243 "listen_address": { 00:19:50.243 "trtype": "TCP", 00:19:50.243 "adrfam": "IPv4", 00:19:50.243 "traddr": "10.0.0.2", 00:19:50.243 "trsvcid": "4420" 00:19:50.243 }, 00:19:50.243 "secure_channel": true 00:19:50.243 } 00:19:50.243 } 00:19:50.243 ] 00:19:50.243 } 00:19:50.243 ] 00:19:50.243 }' 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1259811 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1259811 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1259811 ']' 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:19:50.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.243 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.243 [2024-11-15 11:37:50.933678] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:19:50.243 [2024-11-15 11:37:50.933736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.243 [2024-11-15 11:37:51.005852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.243 [2024-11-15 11:37:51.044928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.243 [2024-11-15 11:37:51.044963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.243 [2024-11-15 11:37:51.044969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.243 [2024-11-15 11:37:51.044974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.243 [2024-11-15 11:37:51.044979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.243 [2024-11-15 11:37:51.045560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.501 [2024-11-15 11:37:51.257761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.501 [2024-11-15 11:37:51.289797] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.501 [2024-11-15 11:37:51.290003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1259963 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1259963 /var/tmp/bdevperf.sock 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1259963 ']' 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:51.437 
11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.437 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:51.437 "subsystems": [ 00:19:51.437 { 00:19:51.437 "subsystem": "keyring", 00:19:51.437 "config": [ 00:19:51.437 { 00:19:51.437 "method": "keyring_file_add_key", 00:19:51.437 "params": { 00:19:51.437 "name": "key0", 00:19:51.437 "path": "/tmp/tmp.6M4cr9t6oG" 00:19:51.437 } 00:19:51.437 } 00:19:51.437 ] 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "subsystem": "iobuf", 00:19:51.437 "config": [ 00:19:51.437 { 00:19:51.437 "method": "iobuf_set_options", 00:19:51.437 "params": { 00:19:51.437 "small_pool_count": 8192, 00:19:51.437 "large_pool_count": 1024, 00:19:51.437 "small_bufsize": 8192, 00:19:51.437 "large_bufsize": 135168, 00:19:51.437 "enable_numa": false 00:19:51.437 } 00:19:51.437 } 00:19:51.437 ] 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "subsystem": "sock", 00:19:51.437 "config": [ 00:19:51.437 { 00:19:51.437 "method": "sock_set_default_impl", 00:19:51.437 "params": { 00:19:51.437 "impl_name": "posix" 00:19:51.437 } 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "method": "sock_impl_set_options", 00:19:51.437 "params": { 00:19:51.437 "impl_name": "ssl", 00:19:51.437 "recv_buf_size": 4096, 00:19:51.437 "send_buf_size": 4096, 00:19:51.437 "enable_recv_pipe": true, 00:19:51.437 "enable_quickack": false, 00:19:51.437 "enable_placement_id": 0, 00:19:51.437 "enable_zerocopy_send_server": true, 00:19:51.437 "enable_zerocopy_send_client": false, 00:19:51.437 "zerocopy_threshold": 0, 00:19:51.437 "tls_version": 0, 00:19:51.437 "enable_ktls": false 00:19:51.437 } 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "method": "sock_impl_set_options", 00:19:51.437 "params": { 00:19:51.437 "impl_name": "posix", 00:19:51.437 "recv_buf_size": 2097152, 00:19:51.437 "send_buf_size": 2097152, 00:19:51.437 "enable_recv_pipe": true, 00:19:51.437 "enable_quickack": false, 00:19:51.437 "enable_placement_id": 0, 00:19:51.437 "enable_zerocopy_send_server": true, 00:19:51.437 "enable_zerocopy_send_client": false, 00:19:51.437 "zerocopy_threshold": 0, 00:19:51.437 "tls_version": 0, 00:19:51.437 "enable_ktls": false 00:19:51.437 } 00:19:51.437 } 00:19:51.437 ] 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "subsystem": "vmd", 00:19:51.437 "config": [] 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "subsystem": "accel", 00:19:51.437 "config": [ 00:19:51.437 { 00:19:51.437 "method": "accel_set_options", 00:19:51.437 "params": { 00:19:51.437 "small_cache_size": 128, 00:19:51.437 "large_cache_size": 16, 00:19:51.437 "task_count": 2048, 00:19:51.437 "sequence_count": 2048, 00:19:51.437 "buf_count": 2048 00:19:51.437 } 00:19:51.437 } 00:19:51.437 ] 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "subsystem": "bdev", 00:19:51.437 "config": [ 00:19:51.437 { 00:19:51.437 "method": "bdev_set_options", 00:19:51.437 "params": { 00:19:51.437 "bdev_io_pool_size": 65535, 00:19:51.437 "bdev_io_cache_size": 256, 00:19:51.437 "bdev_auto_examine": true, 00:19:51.437 "iobuf_small_cache_size": 128, 00:19:51.437 "iobuf_large_cache_size": 16 00:19:51.437 } 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "method": "bdev_raid_set_options", 00:19:51.437 "params": { 
00:19:51.437 "process_window_size_kb": 1024, 00:19:51.437 "process_max_bandwidth_mb_sec": 0 00:19:51.437 } 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "method": "bdev_iscsi_set_options", 00:19:51.437 "params": { 00:19:51.437 "timeout_sec": 30 00:19:51.437 } 00:19:51.437 }, 00:19:51.437 { 00:19:51.437 "method": "bdev_nvme_set_options", 00:19:51.437 "params": { 00:19:51.437 "action_on_timeout": "none", 00:19:51.437 "timeout_us": 0, 00:19:51.437 "timeout_admin_us": 0, 00:19:51.437 "keep_alive_timeout_ms": 10000, 00:19:51.437 "arbitration_burst": 0, 00:19:51.437 "low_priority_weight": 0, 00:19:51.437 "medium_priority_weight": 0, 00:19:51.437 "high_priority_weight": 0, 00:19:51.437 "nvme_adminq_poll_period_us": 10000, 00:19:51.437 "nvme_ioq_poll_period_us": 0, 00:19:51.437 "io_queue_requests": 512, 00:19:51.437 "delay_cmd_submit": true, 00:19:51.437 "transport_retry_count": 4, 00:19:51.437 "bdev_retry_count": 3, 00:19:51.437 "transport_ack_timeout": 0, 00:19:51.437 "ctrlr_loss_timeout_sec": 0, 00:19:51.437 "reconnect_delay_sec": 0, 00:19:51.437 "fast_io_fail_timeout_sec": 0, 00:19:51.437 "disable_auto_failback": false, 00:19:51.437 "generate_uuids": false, 00:19:51.437 "transport_tos": 0, 00:19:51.437 "nvme_error_stat": false, 00:19:51.437 "rdma_srq_size": 0, 00:19:51.437 "io_path_stat": false, 00:19:51.437 "allow_accel_sequence": false, 00:19:51.437 "rdma_max_cq_size": 0, 00:19:51.437 "rdma_cm_event_timeout_ms": 0, 00:19:51.437 "dhchap_digests": [ 00:19:51.437 "sha256", 00:19:51.437 "sha384", 00:19:51.437 "sha512" 00:19:51.437 ], 00:19:51.437 "dhchap_dhgroups": [ 00:19:51.437 "null", 00:19:51.437 "ffdhe2048", 00:19:51.437 "ffdhe3072", 00:19:51.437 "ffdhe4096", 00:19:51.437 "ffdhe6144", 00:19:51.437 "ffdhe8192" 00:19:51.437 ] 00:19:51.437 } 00:19:51.438 }, 00:19:51.438 { 00:19:51.438 "method": "bdev_nvme_attach_controller", 00:19:51.438 "params": { 00:19:51.438 "name": "TLSTEST", 00:19:51.438 "trtype": "TCP", 00:19:51.438 "adrfam": "IPv4", 00:19:51.438 "traddr": "10.0.0.2", 00:19:51.438 "trsvcid": "4420", 00:19:51.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.438 "prchk_reftag": false, 00:19:51.438 "prchk_guard": false, 00:19:51.438 "ctrlr_loss_timeout_sec": 0, 00:19:51.438 "reconnect_delay_sec": 0, 00:19:51.438 "fast_io_fail_timeout_sec": 0, 00:19:51.438 "psk": "key0", 00:19:51.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.438 "hdgst": false, 00:19:51.438 "ddgst": false, 00:19:51.438 "multipath": "multipath" 00:19:51.438 } 00:19:51.438 }, 00:19:51.438 { 00:19:51.438 "method": "bdev_nvme_set_hotplug", 00:19:51.438 "params": { 00:19:51.438 "period_us": 100000, 00:19:51.438 "enable": false 00:19:51.438 } 00:19:51.438 }, 00:19:51.438 { 00:19:51.438 "method": "bdev_wait_for_examine" 00:19:51.438 } 00:19:51.438 ] 00:19:51.438 }, 00:19:51.438 { 00:19:51.438 "subsystem": "nbd", 00:19:51.438 "config": [] 00:19:51.438 } 00:19:51.438 ] 00:19:51.438 }' 00:19:51.438 11:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.438 [2024-11-15 11:37:52.013841] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:19:51.438 [2024-11-15 11:37:52.013898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259963 ] 00:19:51.438 [2024-11-15 11:37:52.078737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.438 [2024-11-15 11:37:52.115705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.438 [2024-11-15 11:37:52.267370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.696 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.696 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:51.696 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.696 Running I/O for 10 seconds... 00:19:54.006 5847.00 IOPS, 22.84 MiB/s [2024-11-15T10:37:55.792Z] 5852.50 IOPS, 22.86 MiB/s [2024-11-15T10:37:56.725Z] 5882.33 IOPS, 22.98 MiB/s [2024-11-15T10:37:57.659Z] 5902.25 IOPS, 23.06 MiB/s [2024-11-15T10:37:58.594Z] 5922.20 IOPS, 23.13 MiB/s [2024-11-15T10:37:59.968Z] 5936.00 IOPS, 23.19 MiB/s [2024-11-15T10:38:00.534Z] 5960.43 IOPS, 23.28 MiB/s [2024-11-15T10:38:01.907Z] 5966.75 IOPS, 23.31 MiB/s [2024-11-15T10:38:02.842Z] 5965.56 IOPS, 23.30 MiB/s [2024-11-15T10:38:02.842Z] 5965.30 IOPS, 23.30 MiB/s 00:20:01.989 Latency(us) 00:20:01.989 [2024-11-15T10:38:02.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.989 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.989 Verification LBA range: start 0x0 length 0x2000 00:20:01.989 TLSTESTn1 : 10.01 5969.04 23.32 0.00 0.00 21411.41 4498.15 23235.49 00:20:01.989 [2024-11-15T10:38:02.842Z] =================================================================================================================== 00:20:01.989 [2024-11-15T10:38:02.842Z] Total : 5969.04 23.32 0.00 0.00 21411.41 4498.15 23235.49 00:20:01.989 { 00:20:01.989 "results": [ 00:20:01.989 { 00:20:01.989 "job": "TLSTESTn1", 00:20:01.989 "core_mask": "0x4", 00:20:01.989 "workload": "verify", 00:20:01.989 "status": "finished", 00:20:01.989 "verify_range": { 00:20:01.989 "start": 0, 00:20:01.989 "length": 8192 00:20:01.989 }, 00:20:01.989 "queue_depth": 128, 00:20:01.989 "io_size": 4096, 00:20:01.989 "runtime": 10.014675, 00:20:01.989 "iops": 5969.040433164331, 00:20:01.989 "mibps": 23.31656419204817, 00:20:01.989 "io_failed": 0, 00:20:01.989 "io_timeout": 0, 00:20:01.989 "avg_latency_us": 21411.40966205263, 00:20:01.989 "min_latency_us": 4498.152727272727, 00:20:01.989 "max_latency_us": 23235.49090909091 00:20:01.989 } 00:20:01.989 ], 00:20:01.989 "core_count": 1 00:20:01.989 } 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1259963 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1259963 ']' 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1259963 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1259963 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1259963' 00:20:01.989 killing process with pid 1259963 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1259963 00:20:01.989 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.989 00:20:01.989 Latency(us) 00:20:01.989 [2024-11-15T10:38:02.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.989 [2024-11-15T10:38:02.842Z] =================================================================================================================== 00:20:01.989 [2024-11-15T10:38:02.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1259963 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1259811 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1259811 ']' 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1259811 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.989 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1259811 00:20:02.248 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:02.248 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:02.248 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1259811' 00:20:02.248 killing process with pid 1259811 00:20:02.248 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1259811 00:20:02.248 11:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1259811 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1262051 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1262051 
00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1262051 ']' 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:02.248 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.248 [2024-11-15 11:38:03.082960] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:02.248 [2024-11-15 11:38:03.083018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.506 [2024-11-15 11:38:03.184025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.506 [2024-11-15 11:38:03.231839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.506 [2024-11-15 11:38:03.231877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.507 [2024-11-15 11:38:03.231888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.507 [2024-11-15 11:38:03.231897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.507 [2024-11-15 11:38:03.231904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:02.507 [2024-11-15 11:38:03.232605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.507 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:02.507 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:02.507 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.507 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.507 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.765 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.765 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.6M4cr9t6oG 00:20:02.765 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6M4cr9t6oG 00:20:02.765 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.022 [2024-11-15 11:38:03.624132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.022 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.280 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.538 [2024-11-15 11:38:04.157575] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.538 [2024-11-15 11:38:04.157822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.538 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.796 malloc0 00:20:03.796 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.055 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:20:04.312 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1262350 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1262350 /var/tmp/bdevperf.sock 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1262350 ']' 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.569 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.569 [2024-11-15 11:38:05.334531] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:04.569 [2024-11-15 11:38:05.334593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262350 ] 00:20:04.569 [2024-11-15 11:38:05.400162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.827 [2024-11-15 11:38:05.440974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.827 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.827 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:04.827 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:20:05.084 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:05.342 [2024-11-15 11:38:06.084266] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.342 nvme0n1 00:20:05.342 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.600 Running I/O for 1 seconds... 
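[editorial note, not part of the captured log] The sequence just traced (target/tls.sh@221 setup_nvmf_tgt with the temporary PSK file, then the bdevperf-side keyring registration and TLS controller attach at @229/@230/@234) condenses to the RPC calls below. This is a sketch assembled only from commands already visible in this trace, with the Jenkins workspace prefix shortened to scripts/rpc.py; it is not the verbatim test script.

    # Target side: TCP transport, TLS-enabled listener, malloc namespace, PSK registration
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side: register the same PSK with the bdevperf app, attach over TLS, run I/O
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests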
00:20:06.536 3669.00 IOPS, 14.33 MiB/s 00:20:06.536 Latency(us) 00:20:06.536 [2024-11-15T10:38:07.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.536 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:06.536 Verification LBA range: start 0x0 length 0x2000 00:20:06.536 nvme0n1 : 1.02 3723.69 14.55 0.00 0.00 34110.86 5123.72 49092.42 00:20:06.536 [2024-11-15T10:38:07.389Z] =================================================================================================================== 00:20:06.536 [2024-11-15T10:38:07.389Z] Total : 3723.69 14.55 0.00 0.00 34110.86 5123.72 49092.42 00:20:06.536 { 00:20:06.536 "results": [ 00:20:06.536 { 00:20:06.536 "job": "nvme0n1", 00:20:06.536 "core_mask": "0x2", 00:20:06.536 "workload": "verify", 00:20:06.536 "status": "finished", 00:20:06.536 "verify_range": { 00:20:06.536 "start": 0, 00:20:06.536 "length": 8192 00:20:06.536 }, 00:20:06.536 "queue_depth": 128, 00:20:06.536 "io_size": 4096, 00:20:06.536 "runtime": 1.019956, 00:20:06.536 "iops": 3723.6900415312034, 00:20:06.536 "mibps": 14.545664224731263, 00:20:06.536 "io_failed": 0, 00:20:06.536 "io_timeout": 0, 00:20:06.536 "avg_latency_us": 34110.862099669685, 00:20:06.536 "min_latency_us": 5123.723636363637, 00:20:06.536 "max_latency_us": 49092.42181818182 00:20:06.536 } 00:20:06.536 ], 00:20:06.536 "core_count": 1 00:20:06.536 } 00:20:06.536 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1262350 00:20:06.536 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1262350 ']' 00:20:06.536 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1262350 00:20:06.536 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.536 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.536 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1262350 00:20:06.794 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:06.794 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:06.794 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1262350' 00:20:06.794 killing process with pid 1262350 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1262350 00:20:06.795 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.795 00:20:06.795 Latency(us) 00:20:06.795 [2024-11-15T10:38:07.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.795 [2024-11-15T10:38:07.648Z] =================================================================================================================== 00:20:06.795 [2024-11-15T10:38:07.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1262350 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1262051 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1262051 ']' 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1262051 00:20:06.795 11:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1262051 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1262051' 00:20:06.795 killing process with pid 1262051 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1262051 00:20:06.795 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1262051 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1262884 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1262884 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1262884 ']' 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.053 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.053 [2024-11-15 11:38:07.870038] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:07.053 [2024-11-15 11:38:07.870096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.312 [2024-11-15 11:38:07.968530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.312 [2024-11-15 11:38:08.015887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.312 [2024-11-15 11:38:08.015926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
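As a quick consistency check on the bdevperf summary above: the MiB/s column is simply IOPS scaled by the 4096-byte I/O size, e.g. 3723.69 IOPS x 4096 B / 2^20 B/MiB = 14.55 MiB/s (and 3669.00 IOPS = 14.33 MiB/s for the interim one-second sample), which matches the reported "mibps" values; the IOPS figure itself is the completed I/O count divided by the measured runtime of 1.019956 s.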
00:20:07.312 [2024-11-15 11:38:08.015937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.312 [2024-11-15 11:38:08.015945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.312 [2024-11-15 11:38:08.015953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.312 [2024-11-15 11:38:08.016668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.312 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.571 [2024-11-15 11:38:08.167625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.571 malloc0 00:20:07.571 [2024-11-15 11:38:08.196889] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.571 [2024-11-15 11:38:08.197126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1262904 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1262904 /var/tmp/bdevperf.sock 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1262904 ']' 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.571 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.571 [2024-11-15 11:38:08.277793] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
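For reference, the bdevperf command line recorded above maps one-to-one onto the job parameters echoed back in the results JSON; an annotated reading (the -z behaviour is inferred from the trace, where the app sits idle until it is configured over RPC):
  # -m 2        core mask 0x2 (the job reports core_mask 0x2, reactor started on core 1)
  # -z          start idle and wait to be configured over RPC
  # -r <sock>   RPC listen socket used by rpc.py and bdevperf.py (/var/tmp/bdevperf.sock)
  # -q 128      queue depth ("queue_depth": 128 in the results)
  # -o 4k       4096-byte I/Os ("io_size": 4096)
  # -w verify   verify workload; -t 1 runs for about one second
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1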
00:20:07.571 [2024-11-15 11:38:08.277847] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262904 ] 00:20:07.571 [2024-11-15 11:38:08.344084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.571 [2024-11-15 11:38:08.384258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.829 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.829 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:07.829 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6M4cr9t6oG 00:20:08.087 11:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:08.345 [2024-11-15 11:38:09.015757] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.345 nvme0n1 00:20:08.345 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.603 Running I/O for 1 seconds... 00:20:09.536 3643.00 IOPS, 14.23 MiB/s 00:20:09.536 Latency(us) 00:20:09.536 [2024-11-15T10:38:10.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.536 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.536 Verification LBA range: start 0x0 length 0x2000 00:20:09.537 nvme0n1 : 1.02 3681.83 14.38 0.00 0.00 34488.56 7745.16 36938.47 00:20:09.537 [2024-11-15T10:38:10.390Z] =================================================================================================================== 00:20:09.537 [2024-11-15T10:38:10.390Z] Total : 3681.83 14.38 0.00 0.00 34488.56 7745.16 36938.47 00:20:09.537 { 00:20:09.537 "results": [ 00:20:09.537 { 00:20:09.537 "job": "nvme0n1", 00:20:09.537 "core_mask": "0x2", 00:20:09.537 "workload": "verify", 00:20:09.537 "status": "finished", 00:20:09.537 "verify_range": { 00:20:09.537 "start": 0, 00:20:09.537 "length": 8192 00:20:09.537 }, 00:20:09.537 "queue_depth": 128, 00:20:09.537 "io_size": 4096, 00:20:09.537 "runtime": 1.024491, 00:20:09.537 "iops": 3681.828342074259, 00:20:09.537 "mibps": 14.382141961227575, 00:20:09.537 "io_failed": 0, 00:20:09.537 "io_timeout": 0, 00:20:09.537 "avg_latency_us": 34488.5569227803, 00:20:09.537 "min_latency_us": 7745.163636363636, 00:20:09.537 "max_latency_us": 36938.472727272725 00:20:09.537 } 00:20:09.537 ], 00:20:09.537 "core_count": 1 00:20:09.537 } 00:20:09.537 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:09.537 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.537 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.537 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.794 11:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:09.794 "subsystems": [ 00:20:09.794 { 00:20:09.794 "subsystem": "keyring", 00:20:09.794 "config": [ 00:20:09.794 { 00:20:09.794 "method": "keyring_file_add_key", 00:20:09.794 "params": { 00:20:09.794 "name": "key0", 00:20:09.794 "path": "/tmp/tmp.6M4cr9t6oG" 00:20:09.794 } 00:20:09.794 } 00:20:09.794 ] 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "subsystem": "iobuf", 00:20:09.794 "config": [ 00:20:09.794 { 00:20:09.794 "method": "iobuf_set_options", 00:20:09.794 "params": { 00:20:09.794 "small_pool_count": 8192, 00:20:09.794 "large_pool_count": 1024, 00:20:09.794 "small_bufsize": 8192, 00:20:09.794 "large_bufsize": 135168, 00:20:09.794 "enable_numa": false 00:20:09.794 } 00:20:09.794 } 00:20:09.794 ] 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "subsystem": "sock", 00:20:09.794 "config": [ 00:20:09.794 { 00:20:09.794 "method": "sock_set_default_impl", 00:20:09.794 "params": { 00:20:09.794 "impl_name": "posix" 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "sock_impl_set_options", 00:20:09.794 "params": { 00:20:09.794 "impl_name": "ssl", 00:20:09.794 "recv_buf_size": 4096, 00:20:09.794 "send_buf_size": 4096, 00:20:09.794 "enable_recv_pipe": true, 00:20:09.794 "enable_quickack": false, 00:20:09.794 "enable_placement_id": 0, 00:20:09.794 "enable_zerocopy_send_server": true, 00:20:09.794 "enable_zerocopy_send_client": false, 00:20:09.794 "zerocopy_threshold": 0, 00:20:09.794 "tls_version": 0, 00:20:09.794 "enable_ktls": false 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "sock_impl_set_options", 00:20:09.794 "params": { 00:20:09.794 "impl_name": "posix", 00:20:09.794 "recv_buf_size": 2097152, 00:20:09.794 "send_buf_size": 2097152, 00:20:09.794 "enable_recv_pipe": true, 00:20:09.794 "enable_quickack": false, 00:20:09.794 "enable_placement_id": 0, 00:20:09.794 "enable_zerocopy_send_server": true, 00:20:09.794 "enable_zerocopy_send_client": false, 00:20:09.794 "zerocopy_threshold": 0, 00:20:09.794 "tls_version": 0, 00:20:09.794 "enable_ktls": false 00:20:09.794 } 00:20:09.794 } 00:20:09.794 ] 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "subsystem": "vmd", 00:20:09.794 "config": [] 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "subsystem": "accel", 00:20:09.794 "config": [ 00:20:09.794 { 00:20:09.794 "method": "accel_set_options", 00:20:09.794 "params": { 00:20:09.794 "small_cache_size": 128, 00:20:09.794 "large_cache_size": 16, 00:20:09.794 "task_count": 2048, 00:20:09.794 "sequence_count": 2048, 00:20:09.794 "buf_count": 2048 00:20:09.794 } 00:20:09.794 } 00:20:09.794 ] 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "subsystem": "bdev", 00:20:09.794 "config": [ 00:20:09.794 { 00:20:09.794 "method": "bdev_set_options", 00:20:09.794 "params": { 00:20:09.794 "bdev_io_pool_size": 65535, 00:20:09.794 "bdev_io_cache_size": 256, 00:20:09.794 "bdev_auto_examine": true, 00:20:09.794 "iobuf_small_cache_size": 128, 00:20:09.794 "iobuf_large_cache_size": 16 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "bdev_raid_set_options", 00:20:09.794 "params": { 00:20:09.794 "process_window_size_kb": 1024, 00:20:09.794 "process_max_bandwidth_mb_sec": 0 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "bdev_iscsi_set_options", 00:20:09.794 "params": { 00:20:09.794 "timeout_sec": 30 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "bdev_nvme_set_options", 00:20:09.794 "params": { 00:20:09.794 "action_on_timeout": "none", 00:20:09.794 
"timeout_us": 0, 00:20:09.794 "timeout_admin_us": 0, 00:20:09.794 "keep_alive_timeout_ms": 10000, 00:20:09.794 "arbitration_burst": 0, 00:20:09.794 "low_priority_weight": 0, 00:20:09.794 "medium_priority_weight": 0, 00:20:09.794 "high_priority_weight": 0, 00:20:09.794 "nvme_adminq_poll_period_us": 10000, 00:20:09.794 "nvme_ioq_poll_period_us": 0, 00:20:09.794 "io_queue_requests": 0, 00:20:09.794 "delay_cmd_submit": true, 00:20:09.794 "transport_retry_count": 4, 00:20:09.794 "bdev_retry_count": 3, 00:20:09.794 "transport_ack_timeout": 0, 00:20:09.794 "ctrlr_loss_timeout_sec": 0, 00:20:09.794 "reconnect_delay_sec": 0, 00:20:09.794 "fast_io_fail_timeout_sec": 0, 00:20:09.794 "disable_auto_failback": false, 00:20:09.794 "generate_uuids": false, 00:20:09.794 "transport_tos": 0, 00:20:09.794 "nvme_error_stat": false, 00:20:09.794 "rdma_srq_size": 0, 00:20:09.794 "io_path_stat": false, 00:20:09.794 "allow_accel_sequence": false, 00:20:09.794 "rdma_max_cq_size": 0, 00:20:09.794 "rdma_cm_event_timeout_ms": 0, 00:20:09.794 "dhchap_digests": [ 00:20:09.794 "sha256", 00:20:09.794 "sha384", 00:20:09.794 "sha512" 00:20:09.794 ], 00:20:09.794 "dhchap_dhgroups": [ 00:20:09.794 "null", 00:20:09.794 "ffdhe2048", 00:20:09.794 "ffdhe3072", 00:20:09.794 "ffdhe4096", 00:20:09.794 "ffdhe6144", 00:20:09.794 "ffdhe8192" 00:20:09.794 ] 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "bdev_nvme_set_hotplug", 00:20:09.794 "params": { 00:20:09.794 "period_us": 100000, 00:20:09.794 "enable": false 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "bdev_malloc_create", 00:20:09.794 "params": { 00:20:09.794 "name": "malloc0", 00:20:09.794 "num_blocks": 8192, 00:20:09.794 "block_size": 4096, 00:20:09.794 "physical_block_size": 4096, 00:20:09.794 "uuid": "8f74801d-8ccc-46e5-bf05-cab85b91ceff", 00:20:09.794 "optimal_io_boundary": 0, 00:20:09.794 "md_size": 0, 00:20:09.794 "dif_type": 0, 00:20:09.794 "dif_is_head_of_md": false, 00:20:09.794 "dif_pi_format": 0 00:20:09.794 } 00:20:09.794 }, 00:20:09.794 { 00:20:09.794 "method": "bdev_wait_for_examine" 00:20:09.795 } 00:20:09.795 ] 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "subsystem": "nbd", 00:20:09.795 "config": [] 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "subsystem": "scheduler", 00:20:09.795 "config": [ 00:20:09.795 { 00:20:09.795 "method": "framework_set_scheduler", 00:20:09.795 "params": { 00:20:09.795 "name": "static" 00:20:09.795 } 00:20:09.795 } 00:20:09.795 ] 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "subsystem": "nvmf", 00:20:09.795 "config": [ 00:20:09.795 { 00:20:09.795 "method": "nvmf_set_config", 00:20:09.795 "params": { 00:20:09.795 "discovery_filter": "match_any", 00:20:09.795 "admin_cmd_passthru": { 00:20:09.795 "identify_ctrlr": false 00:20:09.795 }, 00:20:09.795 "dhchap_digests": [ 00:20:09.795 "sha256", 00:20:09.795 "sha384", 00:20:09.795 "sha512" 00:20:09.795 ], 00:20:09.795 "dhchap_dhgroups": [ 00:20:09.795 "null", 00:20:09.795 "ffdhe2048", 00:20:09.795 "ffdhe3072", 00:20:09.795 "ffdhe4096", 00:20:09.795 "ffdhe6144", 00:20:09.795 "ffdhe8192" 00:20:09.795 ] 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_set_max_subsystems", 00:20:09.795 "params": { 00:20:09.795 "max_subsystems": 1024 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_set_crdt", 00:20:09.795 "params": { 00:20:09.795 "crdt1": 0, 00:20:09.795 "crdt2": 0, 00:20:09.795 "crdt3": 0 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_create_transport", 00:20:09.795 "params": 
{ 00:20:09.795 "trtype": "TCP", 00:20:09.795 "max_queue_depth": 128, 00:20:09.795 "max_io_qpairs_per_ctrlr": 127, 00:20:09.795 "in_capsule_data_size": 4096, 00:20:09.795 "max_io_size": 131072, 00:20:09.795 "io_unit_size": 131072, 00:20:09.795 "max_aq_depth": 128, 00:20:09.795 "num_shared_buffers": 511, 00:20:09.795 "buf_cache_size": 4294967295, 00:20:09.795 "dif_insert_or_strip": false, 00:20:09.795 "zcopy": false, 00:20:09.795 "c2h_success": false, 00:20:09.795 "sock_priority": 0, 00:20:09.795 "abort_timeout_sec": 1, 00:20:09.795 "ack_timeout": 0, 00:20:09.795 "data_wr_pool_size": 0 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_create_subsystem", 00:20:09.795 "params": { 00:20:09.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.795 "allow_any_host": false, 00:20:09.795 "serial_number": "00000000000000000000", 00:20:09.795 "model_number": "SPDK bdev Controller", 00:20:09.795 "max_namespaces": 32, 00:20:09.795 "min_cntlid": 1, 00:20:09.795 "max_cntlid": 65519, 00:20:09.795 "ana_reporting": false 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_subsystem_add_host", 00:20:09.795 "params": { 00:20:09.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.795 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.795 "psk": "key0" 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_subsystem_add_ns", 00:20:09.795 "params": { 00:20:09.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.795 "namespace": { 00:20:09.795 "nsid": 1, 00:20:09.795 "bdev_name": "malloc0", 00:20:09.795 "nguid": "8F74801D8CCC46E5BF05CAB85B91CEFF", 00:20:09.795 "uuid": "8f74801d-8ccc-46e5-bf05-cab85b91ceff", 00:20:09.795 "no_auto_visible": false 00:20:09.795 } 00:20:09.795 } 00:20:09.795 }, 00:20:09.795 { 00:20:09.795 "method": "nvmf_subsystem_add_listener", 00:20:09.795 "params": { 00:20:09.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.795 "listen_address": { 00:20:09.795 "trtype": "TCP", 00:20:09.795 "adrfam": "IPv4", 00:20:09.795 "traddr": "10.0.0.2", 00:20:09.795 "trsvcid": "4420" 00:20:09.795 }, 00:20:09.795 "secure_channel": false, 00:20:09.795 "sock_impl": "ssl" 00:20:09.795 } 00:20:09.795 } 00:20:09.795 ] 00:20:09.795 } 00:20:09.795 ] 00:20:09.795 }' 00:20:09.795 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:10.065 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:10.065 "subsystems": [ 00:20:10.065 { 00:20:10.065 "subsystem": "keyring", 00:20:10.065 "config": [ 00:20:10.065 { 00:20:10.065 "method": "keyring_file_add_key", 00:20:10.065 "params": { 00:20:10.065 "name": "key0", 00:20:10.065 "path": "/tmp/tmp.6M4cr9t6oG" 00:20:10.065 } 00:20:10.065 } 00:20:10.065 ] 00:20:10.065 }, 00:20:10.065 { 00:20:10.065 "subsystem": "iobuf", 00:20:10.065 "config": [ 00:20:10.065 { 00:20:10.065 "method": "iobuf_set_options", 00:20:10.065 "params": { 00:20:10.065 "small_pool_count": 8192, 00:20:10.065 "large_pool_count": 1024, 00:20:10.065 "small_bufsize": 8192, 00:20:10.065 "large_bufsize": 135168, 00:20:10.065 "enable_numa": false 00:20:10.065 } 00:20:10.065 } 00:20:10.065 ] 00:20:10.065 }, 00:20:10.065 { 00:20:10.065 "subsystem": "sock", 00:20:10.065 "config": [ 00:20:10.065 { 00:20:10.065 "method": "sock_set_default_impl", 00:20:10.065 "params": { 00:20:10.065 "impl_name": "posix" 00:20:10.065 } 00:20:10.065 }, 00:20:10.065 { 00:20:10.065 "method": "sock_impl_set_options", 00:20:10.065 
"params": { 00:20:10.065 "impl_name": "ssl", 00:20:10.065 "recv_buf_size": 4096, 00:20:10.065 "send_buf_size": 4096, 00:20:10.065 "enable_recv_pipe": true, 00:20:10.065 "enable_quickack": false, 00:20:10.065 "enable_placement_id": 0, 00:20:10.065 "enable_zerocopy_send_server": true, 00:20:10.065 "enable_zerocopy_send_client": false, 00:20:10.065 "zerocopy_threshold": 0, 00:20:10.065 "tls_version": 0, 00:20:10.065 "enable_ktls": false 00:20:10.065 } 00:20:10.065 }, 00:20:10.065 { 00:20:10.065 "method": "sock_impl_set_options", 00:20:10.065 "params": { 00:20:10.065 "impl_name": "posix", 00:20:10.065 "recv_buf_size": 2097152, 00:20:10.065 "send_buf_size": 2097152, 00:20:10.065 "enable_recv_pipe": true, 00:20:10.065 "enable_quickack": false, 00:20:10.065 "enable_placement_id": 0, 00:20:10.065 "enable_zerocopy_send_server": true, 00:20:10.065 "enable_zerocopy_send_client": false, 00:20:10.065 "zerocopy_threshold": 0, 00:20:10.065 "tls_version": 0, 00:20:10.065 "enable_ktls": false 00:20:10.065 } 00:20:10.065 } 00:20:10.065 ] 00:20:10.065 }, 00:20:10.065 { 00:20:10.065 "subsystem": "vmd", 00:20:10.065 "config": [] 00:20:10.065 }, 00:20:10.065 { 00:20:10.065 "subsystem": "accel", 00:20:10.065 "config": [ 00:20:10.065 { 00:20:10.065 "method": "accel_set_options", 00:20:10.065 "params": { 00:20:10.065 "small_cache_size": 128, 00:20:10.065 "large_cache_size": 16, 00:20:10.065 "task_count": 2048, 00:20:10.065 "sequence_count": 2048, 00:20:10.065 "buf_count": 2048 00:20:10.065 } 00:20:10.065 } 00:20:10.065 ] 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "subsystem": "bdev", 00:20:10.066 "config": [ 00:20:10.066 { 00:20:10.066 "method": "bdev_set_options", 00:20:10.066 "params": { 00:20:10.066 "bdev_io_pool_size": 65535, 00:20:10.066 "bdev_io_cache_size": 256, 00:20:10.066 "bdev_auto_examine": true, 00:20:10.066 "iobuf_small_cache_size": 128, 00:20:10.066 "iobuf_large_cache_size": 16 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_raid_set_options", 00:20:10.066 "params": { 00:20:10.066 "process_window_size_kb": 1024, 00:20:10.066 "process_max_bandwidth_mb_sec": 0 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_iscsi_set_options", 00:20:10.066 "params": { 00:20:10.066 "timeout_sec": 30 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_nvme_set_options", 00:20:10.066 "params": { 00:20:10.066 "action_on_timeout": "none", 00:20:10.066 "timeout_us": 0, 00:20:10.066 "timeout_admin_us": 0, 00:20:10.066 "keep_alive_timeout_ms": 10000, 00:20:10.066 "arbitration_burst": 0, 00:20:10.066 "low_priority_weight": 0, 00:20:10.066 "medium_priority_weight": 0, 00:20:10.066 "high_priority_weight": 0, 00:20:10.066 "nvme_adminq_poll_period_us": 10000, 00:20:10.066 "nvme_ioq_poll_period_us": 0, 00:20:10.066 "io_queue_requests": 512, 00:20:10.066 "delay_cmd_submit": true, 00:20:10.066 "transport_retry_count": 4, 00:20:10.066 "bdev_retry_count": 3, 00:20:10.066 "transport_ack_timeout": 0, 00:20:10.066 "ctrlr_loss_timeout_sec": 0, 00:20:10.066 "reconnect_delay_sec": 0, 00:20:10.066 "fast_io_fail_timeout_sec": 0, 00:20:10.066 "disable_auto_failback": false, 00:20:10.066 "generate_uuids": false, 00:20:10.066 "transport_tos": 0, 00:20:10.066 "nvme_error_stat": false, 00:20:10.066 "rdma_srq_size": 0, 00:20:10.066 "io_path_stat": false, 00:20:10.066 "allow_accel_sequence": false, 00:20:10.066 "rdma_max_cq_size": 0, 00:20:10.066 "rdma_cm_event_timeout_ms": 0, 00:20:10.066 "dhchap_digests": [ 00:20:10.066 "sha256", 00:20:10.066 "sha384", 00:20:10.066 
"sha512" 00:20:10.066 ], 00:20:10.066 "dhchap_dhgroups": [ 00:20:10.066 "null", 00:20:10.066 "ffdhe2048", 00:20:10.066 "ffdhe3072", 00:20:10.066 "ffdhe4096", 00:20:10.066 "ffdhe6144", 00:20:10.066 "ffdhe8192" 00:20:10.066 ] 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_nvme_attach_controller", 00:20:10.066 "params": { 00:20:10.066 "name": "nvme0", 00:20:10.066 "trtype": "TCP", 00:20:10.066 "adrfam": "IPv4", 00:20:10.066 "traddr": "10.0.0.2", 00:20:10.066 "trsvcid": "4420", 00:20:10.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.066 "prchk_reftag": false, 00:20:10.066 "prchk_guard": false, 00:20:10.066 "ctrlr_loss_timeout_sec": 0, 00:20:10.066 "reconnect_delay_sec": 0, 00:20:10.066 "fast_io_fail_timeout_sec": 0, 00:20:10.066 "psk": "key0", 00:20:10.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.066 "hdgst": false, 00:20:10.066 "ddgst": false, 00:20:10.066 "multipath": "multipath" 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_nvme_set_hotplug", 00:20:10.066 "params": { 00:20:10.066 "period_us": 100000, 00:20:10.066 "enable": false 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_enable_histogram", 00:20:10.066 "params": { 00:20:10.066 "name": "nvme0n1", 00:20:10.066 "enable": true 00:20:10.066 } 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "method": "bdev_wait_for_examine" 00:20:10.066 } 00:20:10.066 ] 00:20:10.066 }, 00:20:10.066 { 00:20:10.066 "subsystem": "nbd", 00:20:10.066 "config": [] 00:20:10.066 } 00:20:10.066 ] 00:20:10.066 }' 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1262904 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1262904 ']' 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1262904 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1262904 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1262904' 00:20:10.066 killing process with pid 1262904 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1262904 00:20:10.066 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.066 00:20:10.066 Latency(us) 00:20:10.066 [2024-11-15T10:38:10.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.066 [2024-11-15T10:38:10.919Z] =================================================================================================================== 00:20:10.066 [2024-11-15T10:38:10.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.066 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1262904 00:20:10.323 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1262884 00:20:10.323 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1262884 
']' 00:20:10.323 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1262884 00:20:10.323 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:10.323 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.323 11:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1262884 00:20:10.323 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:10.323 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:10.323 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1262884' 00:20:10.323 killing process with pid 1262884 00:20:10.323 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1262884 00:20:10.323 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1262884 00:20:10.581 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:10.581 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.581 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.581 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:10.581 "subsystems": [ 00:20:10.581 { 00:20:10.581 "subsystem": "keyring", 00:20:10.581 "config": [ 00:20:10.581 { 00:20:10.581 "method": "keyring_file_add_key", 00:20:10.581 "params": { 00:20:10.581 "name": "key0", 00:20:10.581 "path": "/tmp/tmp.6M4cr9t6oG" 00:20:10.581 } 00:20:10.581 } 00:20:10.581 ] 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "subsystem": "iobuf", 00:20:10.581 "config": [ 00:20:10.581 { 00:20:10.581 "method": "iobuf_set_options", 00:20:10.581 "params": { 00:20:10.581 "small_pool_count": 8192, 00:20:10.581 "large_pool_count": 1024, 00:20:10.581 "small_bufsize": 8192, 00:20:10.581 "large_bufsize": 135168, 00:20:10.581 "enable_numa": false 00:20:10.581 } 00:20:10.581 } 00:20:10.581 ] 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "subsystem": "sock", 00:20:10.581 "config": [ 00:20:10.581 { 00:20:10.581 "method": "sock_set_default_impl", 00:20:10.581 "params": { 00:20:10.581 "impl_name": "posix" 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "sock_impl_set_options", 00:20:10.581 "params": { 00:20:10.581 "impl_name": "ssl", 00:20:10.581 "recv_buf_size": 4096, 00:20:10.581 "send_buf_size": 4096, 00:20:10.581 "enable_recv_pipe": true, 00:20:10.581 "enable_quickack": false, 00:20:10.581 "enable_placement_id": 0, 00:20:10.581 "enable_zerocopy_send_server": true, 00:20:10.581 "enable_zerocopy_send_client": false, 00:20:10.581 "zerocopy_threshold": 0, 00:20:10.581 "tls_version": 0, 00:20:10.581 "enable_ktls": false 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "sock_impl_set_options", 00:20:10.581 "params": { 00:20:10.581 "impl_name": "posix", 00:20:10.581 "recv_buf_size": 2097152, 00:20:10.581 "send_buf_size": 2097152, 00:20:10.581 "enable_recv_pipe": true, 00:20:10.581 "enable_quickack": false, 00:20:10.581 "enable_placement_id": 0, 00:20:10.581 "enable_zerocopy_send_server": true, 00:20:10.581 "enable_zerocopy_send_client": false, 00:20:10.581 "zerocopy_threshold": 0, 00:20:10.581 "tls_version": 0, 00:20:10.581 "enable_ktls": 
false 00:20:10.581 } 00:20:10.581 } 00:20:10.581 ] 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "subsystem": "vmd", 00:20:10.581 "config": [] 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "subsystem": "accel", 00:20:10.581 "config": [ 00:20:10.581 { 00:20:10.581 "method": "accel_set_options", 00:20:10.581 "params": { 00:20:10.581 "small_cache_size": 128, 00:20:10.581 "large_cache_size": 16, 00:20:10.581 "task_count": 2048, 00:20:10.581 "sequence_count": 2048, 00:20:10.581 "buf_count": 2048 00:20:10.581 } 00:20:10.581 } 00:20:10.581 ] 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "subsystem": "bdev", 00:20:10.581 "config": [ 00:20:10.581 { 00:20:10.581 "method": "bdev_set_options", 00:20:10.581 "params": { 00:20:10.581 "bdev_io_pool_size": 65535, 00:20:10.581 "bdev_io_cache_size": 256, 00:20:10.581 "bdev_auto_examine": true, 00:20:10.581 "iobuf_small_cache_size": 128, 00:20:10.581 "iobuf_large_cache_size": 16 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "bdev_raid_set_options", 00:20:10.581 "params": { 00:20:10.581 "process_window_size_kb": 1024, 00:20:10.581 "process_max_bandwidth_mb_sec": 0 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "bdev_iscsi_set_options", 00:20:10.581 "params": { 00:20:10.581 "timeout_sec": 30 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "bdev_nvme_set_options", 00:20:10.581 "params": { 00:20:10.581 "action_on_timeout": "none", 00:20:10.581 "timeout_us": 0, 00:20:10.581 "timeout_admin_us": 0, 00:20:10.581 "keep_alive_timeout_ms": 10000, 00:20:10.581 "arbitration_burst": 0, 00:20:10.581 "low_priority_weight": 0, 00:20:10.581 "medium_priority_weight": 0, 00:20:10.581 "high_priority_weight": 0, 00:20:10.581 "nvme_adminq_poll_period_us": 10000, 00:20:10.581 "nvme_ioq_poll_period_us": 0, 00:20:10.581 "io_queue_requests": 0, 00:20:10.581 "delay_cmd_submit": true, 00:20:10.581 "transport_retry_count": 4, 00:20:10.581 "bdev_retry_count": 3, 00:20:10.581 "transport_ack_timeout": 0, 00:20:10.581 "ctrlr_loss_timeout_sec": 0, 00:20:10.581 "reconnect_delay_sec": 0, 00:20:10.581 "fast_io_fail_timeout_sec": 0, 00:20:10.581 "disable_auto_failback": false, 00:20:10.581 "generate_uuids": false, 00:20:10.581 "transport_tos": 0, 00:20:10.581 "nvme_error_stat": false, 00:20:10.581 "rdma_srq_size": 0, 00:20:10.581 "io_path_stat": false, 00:20:10.581 "allow_accel_sequence": false, 00:20:10.581 "rdma_max_cq_size": 0, 00:20:10.581 "rdma_cm_event_timeout_ms": 0, 00:20:10.581 "dhchap_digests": [ 00:20:10.581 "sha256", 00:20:10.581 "sha384", 00:20:10.581 "sha512" 00:20:10.581 ], 00:20:10.581 "dhchap_dhgroups": [ 00:20:10.581 "null", 00:20:10.581 "ffdhe2048", 00:20:10.581 "ffdhe3072", 00:20:10.581 "ffdhe4096", 00:20:10.581 "ffdhe6144", 00:20:10.581 "ffdhe8192" 00:20:10.581 ] 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "bdev_nvme_set_hotplug", 00:20:10.581 "params": { 00:20:10.581 "period_us": 100000, 00:20:10.581 "enable": false 00:20:10.581 } 00:20:10.581 }, 00:20:10.581 { 00:20:10.581 "method": "bdev_malloc_create", 00:20:10.581 "params": { 00:20:10.581 "name": "malloc0", 00:20:10.581 "num_blocks": 8192, 00:20:10.581 "block_size": 4096, 00:20:10.581 "physical_block_size": 4096, 00:20:10.581 "uuid": "8f74801d-8ccc-46e5-bf05-cab85b91ceff", 00:20:10.581 "optimal_io_boundary": 0, 00:20:10.582 "md_size": 0, 00:20:10.582 "dif_type": 0, 00:20:10.582 "dif_is_head_of_md": false, 00:20:10.582 "dif_pi_format": 0 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "bdev_wait_for_examine" 
00:20:10.582 } 00:20:10.582 ] 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "subsystem": "nbd", 00:20:10.582 "config": [] 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "subsystem": "scheduler", 00:20:10.582 "config": [ 00:20:10.582 { 00:20:10.582 "method": "framework_set_scheduler", 00:20:10.582 "params": { 00:20:10.582 "name": "static" 00:20:10.582 } 00:20:10.582 } 00:20:10.582 ] 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "subsystem": "nvmf", 00:20:10.582 "config": [ 00:20:10.582 { 00:20:10.582 "method": "nvmf_set_config", 00:20:10.582 "params": { 00:20:10.582 "discovery_filter": "match_any", 00:20:10.582 "admin_cmd_passthru": { 00:20:10.582 "identify_ctrlr": false 00:20:10.582 }, 00:20:10.582 "dhchap_digests": [ 00:20:10.582 "sha256", 00:20:10.582 "sha384", 00:20:10.582 "sha512" 00:20:10.582 ], 00:20:10.582 "dhchap_dhgroups": [ 00:20:10.582 "null", 00:20:10.582 "ffdhe2048", 00:20:10.582 "ffdhe3072", 00:20:10.582 "ffdhe4096", 00:20:10.582 "ffdhe6144", 00:20:10.582 "ffdhe8192" 00:20:10.582 ] 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_set_max_subsystems", 00:20:10.582 "params": { 00:20:10.582 "max_subsystems": 1024 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_set_crdt", 00:20:10.582 "params": { 00:20:10.582 "crdt1": 0, 00:20:10.582 "crdt2": 0, 00:20:10.582 "crdt3": 0 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_create_transport", 00:20:10.582 "params": { 00:20:10.582 "trtype": "TCP", 00:20:10.582 "max_queue_depth": 128, 00:20:10.582 "max_io_qpairs_per_ctrlr": 127, 00:20:10.582 "in_capsule_data_size": 4096, 00:20:10.582 "max_io_size": 131072, 00:20:10.582 "io_unit_size": 131072, 00:20:10.582 "max_aq_depth": 128, 00:20:10.582 "num_shared_buffers": 511, 00:20:10.582 "buf_cache_size": 4294967295, 00:20:10.582 "dif_insert_or_strip": false, 00:20:10.582 "zcopy": false, 00:20:10.582 "c2h_success": false, 00:20:10.582 "sock_priority": 0, 00:20:10.582 "abort_timeout_sec": 1, 00:20:10.582 "ack_timeout": 0, 00:20:10.582 "data_wr_pool_size": 0 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_create_subsystem", 00:20:10.582 "params": { 00:20:10.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.582 "allow_any_host": false, 00:20:10.582 "serial_number": "00000000000000000000", 00:20:10.582 "model_number": "SPDK bdev Controller", 00:20:10.582 "max_namespaces": 32, 00:20:10.582 "min_cntlid": 1, 00:20:10.582 "max_cntlid": 65519, 00:20:10.582 "ana_reporting": false 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_subsystem_add_host", 00:20:10.582 "params": { 00:20:10.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.582 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.582 "psk": "key0" 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_subsystem_add_ns", 00:20:10.582 "params": { 00:20:10.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.582 "namespace": { 00:20:10.582 "nsid": 1, 00:20:10.582 "bdev_name": "malloc0", 00:20:10.582 "nguid": "8F74801D8CCC46E5BF05CAB85B91CEFF", 00:20:10.582 "uuid": "8f74801d-8ccc-46e5-bf05-cab85b91ceff", 00:20:10.582 "no_auto_visible": false 00:20:10.582 } 00:20:10.582 } 00:20:10.582 }, 00:20:10.582 { 00:20:10.582 "method": "nvmf_subsystem_add_listener", 00:20:10.582 "params": { 00:20:10.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.582 "listen_address": { 00:20:10.582 "trtype": "TCP", 00:20:10.582 "adrfam": "IPv4", 00:20:10.582 "traddr": "10.0.0.2", 00:20:10.582 "trsvcid": "4420" 00:20:10.582 }, 00:20:10.582 
"secure_channel": false, 00:20:10.582 "sock_impl": "ssl" 00:20:10.582 } 00:20:10.582 } 00:20:10.582 ] 00:20:10.582 } 00:20:10.582 ] 00:20:10.582 }' 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1263445 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1263445 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1263445 ']' 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.582 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.582 [2024-11-15 11:38:11.261484] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:10.582 [2024-11-15 11:38:11.261543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.582 [2024-11-15 11:38:11.361407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.582 [2024-11-15 11:38:11.409183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.582 [2024-11-15 11:38:11.409225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.582 [2024-11-15 11:38:11.409236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.582 [2024-11-15 11:38:11.409245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.582 [2024-11-15 11:38:11.409253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.582 [2024-11-15 11:38:11.410013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.839 [2024-11-15 11:38:11.633064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.839 [2024-11-15 11:38:11.665073] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.839 [2024-11-15 11:38:11.665315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.404 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.404 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:11.404 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.404 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.404 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1263722 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1263722 /var/tmp/bdevperf.sock 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1263722 ']' 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
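The target that just came back up above was not reconfigured by hand: target/tls.sh captured the live configuration earlier with 'rpc_cmd save_config' (the tgtcfg JSON dumped above) and re-launched nvmf_tgt with '-c /dev/fd/62', i.e. the echoed JSON handed in on a file descriptor. A minimal sketch of that pattern, leaving out the network-namespace and wrapper functions the test uses:
  # capture everything configured so far: keyring key0, malloc0, the subsystem and the TLS listener
  tgtcfg=$(scripts/rpc.py save_config)
  # restart the target and replay the captured JSON instead of re-issuing the RPCs
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
The bdevperf instance started just below is handled the same way, with its own saved configuration arriving on /dev/fd/63.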
00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.662 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:11.662 "subsystems": [ 00:20:11.662 { 00:20:11.662 "subsystem": "keyring", 00:20:11.662 "config": [ 00:20:11.662 { 00:20:11.662 "method": "keyring_file_add_key", 00:20:11.662 "params": { 00:20:11.662 "name": "key0", 00:20:11.662 "path": "/tmp/tmp.6M4cr9t6oG" 00:20:11.662 } 00:20:11.662 } 00:20:11.662 ] 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "subsystem": "iobuf", 00:20:11.662 "config": [ 00:20:11.662 { 00:20:11.662 "method": "iobuf_set_options", 00:20:11.662 "params": { 00:20:11.662 "small_pool_count": 8192, 00:20:11.662 "large_pool_count": 1024, 00:20:11.662 "small_bufsize": 8192, 00:20:11.662 "large_bufsize": 135168, 00:20:11.662 "enable_numa": false 00:20:11.662 } 00:20:11.662 } 00:20:11.662 ] 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "subsystem": "sock", 00:20:11.662 "config": [ 00:20:11.662 { 00:20:11.662 "method": "sock_set_default_impl", 00:20:11.662 "params": { 00:20:11.662 "impl_name": "posix" 00:20:11.662 } 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "method": "sock_impl_set_options", 00:20:11.662 "params": { 00:20:11.662 "impl_name": "ssl", 00:20:11.662 "recv_buf_size": 4096, 00:20:11.662 "send_buf_size": 4096, 00:20:11.662 "enable_recv_pipe": true, 00:20:11.662 "enable_quickack": false, 00:20:11.662 "enable_placement_id": 0, 00:20:11.662 "enable_zerocopy_send_server": true, 00:20:11.662 "enable_zerocopy_send_client": false, 00:20:11.662 "zerocopy_threshold": 0, 00:20:11.662 "tls_version": 0, 00:20:11.662 "enable_ktls": false 00:20:11.662 } 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "method": "sock_impl_set_options", 00:20:11.662 "params": { 00:20:11.662 "impl_name": "posix", 00:20:11.662 "recv_buf_size": 2097152, 00:20:11.662 "send_buf_size": 2097152, 00:20:11.662 "enable_recv_pipe": true, 00:20:11.662 "enable_quickack": false, 00:20:11.662 "enable_placement_id": 0, 00:20:11.662 "enable_zerocopy_send_server": true, 00:20:11.662 "enable_zerocopy_send_client": false, 00:20:11.662 "zerocopy_threshold": 0, 00:20:11.662 "tls_version": 0, 00:20:11.662 "enable_ktls": false 00:20:11.662 } 00:20:11.662 } 00:20:11.662 ] 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "subsystem": "vmd", 00:20:11.662 "config": [] 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "subsystem": "accel", 00:20:11.662 "config": [ 00:20:11.662 { 00:20:11.662 "method": "accel_set_options", 00:20:11.662 "params": { 00:20:11.662 "small_cache_size": 128, 00:20:11.662 "large_cache_size": 16, 00:20:11.662 "task_count": 2048, 00:20:11.662 "sequence_count": 2048, 00:20:11.662 "buf_count": 2048 00:20:11.662 } 00:20:11.662 } 00:20:11.662 ] 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "subsystem": "bdev", 00:20:11.662 "config": [ 00:20:11.662 { 00:20:11.662 "method": "bdev_set_options", 00:20:11.662 "params": { 00:20:11.662 "bdev_io_pool_size": 65535, 00:20:11.662 "bdev_io_cache_size": 256, 00:20:11.662 "bdev_auto_examine": true, 00:20:11.662 "iobuf_small_cache_size": 128, 00:20:11.662 "iobuf_large_cache_size": 16 00:20:11.662 } 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "method": "bdev_raid_set_options", 00:20:11.662 "params": { 00:20:11.662 "process_window_size_kb": 1024, 00:20:11.662 "process_max_bandwidth_mb_sec": 0 00:20:11.662 } 00:20:11.662 }, 00:20:11.662 { 00:20:11.662 "method": "bdev_iscsi_set_options", 00:20:11.662 "params": { 00:20:11.662 "timeout_sec": 30 00:20:11.662 } 00:20:11.662 }, 00:20:11.662 { 
00:20:11.662 "method": "bdev_nvme_set_options", 00:20:11.662 "params": { 00:20:11.662 "action_on_timeout": "none", 00:20:11.662 "timeout_us": 0, 00:20:11.662 "timeout_admin_us": 0, 00:20:11.662 "keep_alive_timeout_ms": 10000, 00:20:11.662 "arbitration_burst": 0, 00:20:11.662 "low_priority_weight": 0, 00:20:11.662 "medium_priority_weight": 0, 00:20:11.662 "high_priority_weight": 0, 00:20:11.662 "nvme_adminq_poll_period_us": 10000, 00:20:11.662 "nvme_ioq_poll_period_us": 0, 00:20:11.662 "io_queue_requests": 512, 00:20:11.662 "delay_cmd_submit": true, 00:20:11.662 "transport_retry_count": 4, 00:20:11.662 "bdev_retry_count": 3, 00:20:11.662 "transport_ack_timeout": 0, 00:20:11.662 "ctrlr_loss_timeout_sec": 0, 00:20:11.662 "reconnect_delay_sec": 0, 00:20:11.662 "fast_io_fail_timeout_sec": 0, 00:20:11.662 "disable_auto_failback": false, 00:20:11.662 "generate_uuids": false, 00:20:11.662 "transport_tos": 0, 00:20:11.662 "nvme_error_stat": false, 00:20:11.662 "rdma_srq_size": 0, 00:20:11.662 "io_path_stat": false, 00:20:11.662 "allow_accel_sequence": false, 00:20:11.662 "rdma_max_cq_size": 0, 00:20:11.662 "rdma_cm_event_timeout_ms": 0, 00:20:11.662 "dhchap_digests": [ 00:20:11.662 "sha256", 00:20:11.662 "sha384", 00:20:11.662 "sha512" 00:20:11.662 ], 00:20:11.662 "dhchap_dhgroups": [ 00:20:11.662 "null", 00:20:11.662 "ffdhe2048", 00:20:11.662 "ffdhe3072", 00:20:11.662 "ffdhe4096", 00:20:11.662 "ffdhe6144", 00:20:11.662 "ffdhe8192" 00:20:11.662 ] 00:20:11.662 } 00:20:11.663 }, 00:20:11.663 { 00:20:11.663 "method": "bdev_nvme_attach_controller", 00:20:11.663 "params": { 00:20:11.663 "name": "nvme0", 00:20:11.663 "trtype": "TCP", 00:20:11.663 "adrfam": "IPv4", 00:20:11.663 "traddr": "10.0.0.2", 00:20:11.663 "trsvcid": "4420", 00:20:11.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.663 "prchk_reftag": false, 00:20:11.663 "prchk_guard": false, 00:20:11.663 "ctrlr_loss_timeout_sec": 0, 00:20:11.663 "reconnect_delay_sec": 0, 00:20:11.663 "fast_io_fail_timeout_sec": 0, 00:20:11.663 "psk": "key0", 00:20:11.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.663 "hdgst": false, 00:20:11.663 "ddgst": false, 00:20:11.663 "multipath": "multipath" 00:20:11.663 } 00:20:11.663 }, 00:20:11.663 { 00:20:11.663 "method": "bdev_nvme_set_hotplug", 00:20:11.663 "params": { 00:20:11.663 "period_us": 100000, 00:20:11.663 "enable": false 00:20:11.663 } 00:20:11.663 }, 00:20:11.663 { 00:20:11.663 "method": "bdev_enable_histogram", 00:20:11.663 "params": { 00:20:11.663 "name": "nvme0n1", 00:20:11.663 "enable": true 00:20:11.663 } 00:20:11.663 }, 00:20:11.663 { 00:20:11.663 "method": "bdev_wait_for_examine" 00:20:11.663 } 00:20:11.663 ] 00:20:11.663 }, 00:20:11.663 { 00:20:11.663 "subsystem": "nbd", 00:20:11.663 "config": [] 00:20:11.663 } 00:20:11.663 ] 00:20:11.663 }' 00:20:11.663 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.663 [2024-11-15 11:38:12.343125] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
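Because the replayed bdevperf configuration above already carries the keyring key, the TLS-enabled bdev_nvme_attach_controller (with "psk": "key0") and bdev_enable_histogram, the restarted initiator needs no further RPCs; the trace that follows only sanity-checks that the controller came up before calling perform_tests. The equivalent stand-alone check, assuming the same RPC socket, would be:
  # should print the controller name from the replayed config, i.e. nvme0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'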
00:20:11.663 [2024-11-15 11:38:12.343188] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263722 ] 00:20:11.663 [2024-11-15 11:38:12.409487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.663 [2024-11-15 11:38:12.449809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.919 [2024-11-15 11:38:12.601322] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.919 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.919 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:11.919 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.919 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:12.175 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.175 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.431 Running I/O for 1 seconds... 00:20:13.361 3918.00 IOPS, 15.30 MiB/s 00:20:13.361 Latency(us) 00:20:13.361 [2024-11-15T10:38:14.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.362 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:13.362 Verification LBA range: start 0x0 length 0x2000 00:20:13.362 nvme0n1 : 1.02 3976.68 15.53 0.00 0.00 31948.61 6732.33 33363.78 00:20:13.362 [2024-11-15T10:38:14.215Z] =================================================================================================================== 00:20:13.362 [2024-11-15T10:38:14.215Z] Total : 3976.68 15.53 0.00 0.00 31948.61 6732.33 33363.78 00:20:13.362 { 00:20:13.362 "results": [ 00:20:13.362 { 00:20:13.362 "job": "nvme0n1", 00:20:13.362 "core_mask": "0x2", 00:20:13.362 "workload": "verify", 00:20:13.362 "status": "finished", 00:20:13.362 "verify_range": { 00:20:13.362 "start": 0, 00:20:13.362 "length": 8192 00:20:13.362 }, 00:20:13.362 "queue_depth": 128, 00:20:13.362 "io_size": 4096, 00:20:13.362 "runtime": 1.017684, 00:20:13.362 "iops": 3976.6764535946327, 00:20:13.362 "mibps": 15.533892396854034, 00:20:13.362 "io_failed": 0, 00:20:13.362 "io_timeout": 0, 00:20:13.362 "avg_latency_us": 31948.608619628456, 00:20:13.362 "min_latency_us": 6732.334545454545, 00:20:13.362 "max_latency_us": 33363.781818181815 00:20:13.362 } 00:20:13.362 ], 00:20:13.362 "core_count": 1 00:20:13.362 } 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' 
--id = --pid ']' 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:13.362 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.362 nvmf_trace.0 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1263722 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1263722 ']' 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1263722 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1263722 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1263722' 00:20:13.619 killing process with pid 1263722 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1263722 00:20:13.619 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.619 00:20:13.619 Latency(us) 00:20:13.619 [2024-11-15T10:38:14.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.619 [2024-11-15T10:38:14.472Z] =================================================================================================================== 00:20:13.619 [2024-11-15T10:38:14.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.619 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1263722 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:13.881 rmmod nvme_tcp 00:20:13.881 rmmod nvme_fabrics 00:20:13.881 rmmod nvme_keyring 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:13.881 11:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1263445 ']' 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1263445 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1263445 ']' 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1263445 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1263445 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1263445' 00:20:13.881 killing process with pid 1263445 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1263445 00:20:13.881 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1263445 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.139 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.039 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.039 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zC46utioZu /tmp/tmp.kn2DecQ9CF /tmp/tmp.6M4cr9t6oG 00:20:16.039 00:20:16.039 real 1m24.055s 00:20:16.039 user 2m12.101s 00:20:16.039 sys 0m31.655s 00:20:16.039 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.039 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.039 ************************************ 00:20:16.039 END TEST nvmf_tls 
00:20:16.039 ************************************ 00:20:16.297 11:38:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.297 11:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:16.297 11:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:16.297 11:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.297 ************************************ 00:20:16.297 START TEST nvmf_fips 00:20:16.297 ************************************ 00:20:16.297 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.297 * Looking for test storage... 00:20:16.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.297 --rc genhtml_branch_coverage=1 00:20:16.297 --rc genhtml_function_coverage=1 00:20:16.297 --rc genhtml_legend=1 00:20:16.297 --rc geninfo_all_blocks=1 00:20:16.297 --rc geninfo_unexecuted_blocks=1 00:20:16.297 00:20:16.297 ' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.297 --rc genhtml_branch_coverage=1 00:20:16.297 --rc genhtml_function_coverage=1 00:20:16.297 --rc genhtml_legend=1 00:20:16.297 --rc geninfo_all_blocks=1 00:20:16.297 --rc geninfo_unexecuted_blocks=1 00:20:16.297 00:20:16.297 ' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.297 --rc genhtml_branch_coverage=1 00:20:16.297 --rc genhtml_function_coverage=1 00:20:16.297 --rc genhtml_legend=1 00:20:16.297 --rc geninfo_all_blocks=1 00:20:16.297 --rc geninfo_unexecuted_blocks=1 00:20:16.297 00:20:16.297 ' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:16.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.297 --rc genhtml_branch_coverage=1 00:20:16.297 --rc genhtml_function_coverage=1 00:20:16.297 --rc genhtml_legend=1 00:20:16.297 --rc geninfo_all_blocks=1 00:20:16.297 --rc geninfo_unexecuted_blocks=1 00:20:16.297 00:20:16.297 ' 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.297 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:16.298 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:16.556 11:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.556 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:16.557 Error setting digest 00:20:16.557 408219E1307F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:16.557 408219E1307F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.557 
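Before any TLS traffic is generated, the trace above checks that the OpenSSL FIPS provider is really enforcing approved algorithms: fips.sh points OPENSSL_CONF at a generated spdk_fips.conf, requires "openssl list -providers" to report both the base and the fips provider, and then expects a non-approved digest to be refused, which is why the "Error setting digest" lines for MD5 are the desired outcome rather than a failure. A minimal standalone sketch of the same sanity check follows; the OPENSSL_CONF path is illustrative only and assumes an OpenSSL 3.x install whose fips module has already been installed and configured.

    # Standalone sketch of the FIPS sanity check performed by fips.sh above.
    # Assumes an OpenSSL 3.x build with a configured fips provider module;
    # the config path here is a placeholder, not the file fips.sh generates.
    export OPENSSL_CONF=/etc/ssl/spdk_fips.conf

    # Both the base and the fips provider should be listed.
    openssl list -providers | grep -i name

    # A non-approved digest must be refused; success here means FIPS is off.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded, FIPS mode is not active" >&2
        exit 1
    fi
    echo "FIPS provider active: MD5 rejected as expected"
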
11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.557 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.821 11:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:21.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:21.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.821 11:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:21.821 Found net devices under 0000:af:00.0: cvl_0_0 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.821 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:21.821 Found net devices under 0000:af:00.1: cvl_0_1 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:21.822 11:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:21.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:20:21.822 00:20:21.822 --- 10.0.0.2 ping statistics --- 00:20:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.822 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:20:21.822 00:20:21.822 --- 10.0.0.1 ping statistics --- 00:20:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.822 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1267736 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1267736 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1267736 ']' 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.822 [2024-11-15 11:38:22.617080] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
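The nvmftestinit/nvmf_tcp_init sequence above turns the two physical E810 ports (cvl_0_0 and cvl_0_1, the 0x159b functions found under 0000:af:00.x) into a point-to-point test link: the target-side port is moved into a private network namespace, both ends receive 10.0.0.x/24 addresses, an iptables rule admits NVMe/TCP port 4420, connectivity is verified with a ping in each direction, and nvmf_tgt is then launched inside that namespace on core mask 0x2. The sketch below rebuilds the same topology by hand under the assumption that the interface names are specific to this host; treat them and the build path as placeholders elsewhere.

    # Hand-built version of the namespace topology set up by nvmf_tcp_init above.
    # cvl_0_0/cvl_0_1 are this E810 host's interface names; substitute your own.
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                       # initiator side -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator side

    # The target application itself runs inside the namespace on core 1 (mask 0x2).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
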
00:20:21.822 [2024-11-15 11:38:22.617141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.080 [2024-11-15 11:38:22.689643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.080 [2024-11-15 11:38:22.727060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.080 [2024-11-15 11:38:22.727092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.080 [2024-11-15 11:38:22.727098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.080 [2024-11-15 11:38:22.727103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.080 [2024-11-15 11:38:22.727108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.080 [2024-11-15 11:38:22.727682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.dLo 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.dLo 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.dLo 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.dLo 00:20:22.080 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.338 [2024-11-15 11:38:23.123246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.338 [2024-11-15 11:38:23.139263] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.338 [2024-11-15 11:38:23.139453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.338 malloc0 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.596 11:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1267770 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1267770 /var/tmp/bdevperf.sock 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1267770 ']' 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:22.596 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.596 [2024-11-15 11:38:23.272729] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:22.596 [2024-11-15 11:38:23.272795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267770 ] 00:20:22.596 [2024-11-15 11:38:23.339689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.596 [2024-11-15 11:38:23.377538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.854 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.854 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:22.854 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.dLo 00:20:23.112 11:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.370 [2024-11-15 11:38:23.992890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.370 TLSTESTn1 00:20:23.370 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.370 Running I/O for 10 seconds... 
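At this point the target inside the namespace is listening with TLS on 10.0.0.2:4420, the pre-shared key has been written to /tmp/spdk-psk.dLo with mode 0600, and bdevperf has been started on core mask 0x4 with -z, so it idles until configured over /var/tmp/bdevperf.sock. The initiator-side RPC sequence visible in the trace is summarized below: the key file is registered in the keyring, a TLS-protected NVMe/TCP controller named TLSTEST is attached, and perform_tests then drives the 10-second 4 KiB verify workload (queue depth 128) whose per-second IOPS samples appear next. Only the host-side calls are shown here; the target-side subsystem setup done by setup_nvmf_tgt_conf is not expanded in this trace.

    # Initiator-side configuration of bdevperf, as issued via rpc.py above.
    RPC=./scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Register the PSK file, then attach a TLS NVMe/TCP controller that uses it.
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/spdk-psk.dLo
    $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the queued workload (-q 128 -o 4096 -w verify -t 10 on the command line).
    ./examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
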
00:20:25.680 5872.00 IOPS, 22.94 MiB/s [2024-11-15T10:38:27.467Z] 5876.50 IOPS, 22.96 MiB/s [2024-11-15T10:38:28.401Z] 5502.00 IOPS, 21.49 MiB/s [2024-11-15T10:38:29.335Z] 5141.75 IOPS, 20.08 MiB/s [2024-11-15T10:38:30.269Z] 4940.20 IOPS, 19.30 MiB/s [2024-11-15T10:38:31.639Z] 4751.83 IOPS, 18.56 MiB/s [2024-11-15T10:38:32.571Z] 4641.14 IOPS, 18.13 MiB/s [2024-11-15T10:38:33.505Z] 4574.62 IOPS, 17.87 MiB/s [2024-11-15T10:38:34.440Z] 4533.00 IOPS, 17.71 MiB/s [2024-11-15T10:38:34.440Z] 4489.30 IOPS, 17.54 MiB/s 00:20:33.587 Latency(us) 00:20:33.587 [2024-11-15T10:38:34.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:33.587 Verification LBA range: start 0x0 length 0x2000 00:20:33.587 TLSTESTn1 : 10.02 4494.48 17.56 0.00 0.00 28441.39 4527.94 53620.36 00:20:33.587 [2024-11-15T10:38:34.440Z] =================================================================================================================== 00:20:33.587 [2024-11-15T10:38:34.440Z] Total : 4494.48 17.56 0.00 0.00 28441.39 4527.94 53620.36 00:20:33.587 { 00:20:33.587 "results": [ 00:20:33.587 { 00:20:33.587 "job": "TLSTESTn1", 00:20:33.587 "core_mask": "0x4", 00:20:33.587 "workload": "verify", 00:20:33.587 "status": "finished", 00:20:33.587 "verify_range": { 00:20:33.587 "start": 0, 00:20:33.587 "length": 8192 00:20:33.587 }, 00:20:33.587 "queue_depth": 128, 00:20:33.587 "io_size": 4096, 00:20:33.587 "runtime": 10.016964, 00:20:33.587 "iops": 4494.47557164027, 00:20:33.587 "mibps": 17.556545201719803, 00:20:33.587 "io_failed": 0, 00:20:33.587 "io_timeout": 0, 00:20:33.587 "avg_latency_us": 28441.39077755633, 00:20:33.587 "min_latency_us": 4527.941818181818, 00:20:33.587 "max_latency_us": 53620.36363636364 00:20:33.587 } 00:20:33.587 ], 00:20:33.587 "core_count": 1 00:20:33.587 } 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:33.587 nvmf_trace.0 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1267770 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1267770 ']' 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 1267770 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1267770 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1267770' 00:20:33.587 killing process with pid 1267770 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1267770 00:20:33.587 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.587 00:20:33.587 Latency(us) 00:20:33.587 [2024-11-15T10:38:34.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.587 [2024-11-15T10:38:34.440Z] =================================================================================================================== 00:20:33.587 [2024-11-15T10:38:34.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.587 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1267770 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.847 rmmod nvme_tcp 00:20:33.847 rmmod nvme_fabrics 00:20:33.847 rmmod nvme_keyring 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1267736 ']' 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1267736 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1267736 ']' 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1267736 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:33.847 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1267736 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:34.105 11:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1267736' 00:20:34.105 killing process with pid 1267736 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1267736 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1267736 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.105 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.114 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.114 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.dLo 00:20:36.114 00:20:36.114 real 0m19.984s 00:20:36.114 user 0m21.203s 00:20:36.114 sys 0m9.629s 00:20:36.114 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:36.114 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:36.114 ************************************ 00:20:36.114 END TEST nvmf_fips 00:20:36.114 ************************************ 00:20:36.386 11:38:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:36.386 11:38:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:36.386 11:38:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:36.386 11:38:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 ************************************ 00:20:36.386 START TEST nvmf_control_msg_list 00:20:36.386 ************************************ 00:20:36.386 11:38:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:36.386 * Looking for test storage... 
00:20:36.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.386 --rc genhtml_branch_coverage=1 00:20:36.386 --rc genhtml_function_coverage=1 00:20:36.386 --rc genhtml_legend=1 00:20:36.386 --rc geninfo_all_blocks=1 00:20:36.386 --rc geninfo_unexecuted_blocks=1 00:20:36.386 00:20:36.386 ' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.386 --rc genhtml_branch_coverage=1 00:20:36.386 --rc genhtml_function_coverage=1 00:20:36.386 --rc genhtml_legend=1 00:20:36.386 --rc geninfo_all_blocks=1 00:20:36.386 --rc geninfo_unexecuted_blocks=1 00:20:36.386 00:20:36.386 ' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.386 --rc genhtml_branch_coverage=1 00:20:36.386 --rc genhtml_function_coverage=1 00:20:36.386 --rc genhtml_legend=1 00:20:36.386 --rc geninfo_all_blocks=1 00:20:36.386 --rc geninfo_unexecuted_blocks=1 00:20:36.386 00:20:36.386 ' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.386 --rc genhtml_branch_coverage=1 00:20:36.386 --rc genhtml_function_coverage=1 00:20:36.386 --rc genhtml_legend=1 00:20:36.386 --rc geninfo_all_blocks=1 00:20:36.386 --rc geninfo_unexecuted_blocks=1 00:20:36.386 00:20:36.386 ' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.386 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.387 11:38:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:42.957 11:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:42.957 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.957 11:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.957 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:42.957 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:42.958 Found net devices under 0000:af:00.0: cvl_0_0 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:42.958 Found net devices under 0000:af:00.1: cvl_0_1 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.958 11:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:20:42.958 00:20:42.958 --- 10.0.0.2 ping statistics --- 00:20:42.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.958 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:20:42.958 00:20:42.958 --- 10.0.0.1 ping statistics --- 00:20:42.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.958 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1273575 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1273575 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 1273575 ']' 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.958 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.958 [2024-11-15 11:38:42.991324] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:20:42.958 [2024-11-15 11:38:42.991381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.958 [2024-11-15 11:38:43.091846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.958 [2024-11-15 11:38:43.139820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.958 [2024-11-15 11:38:43.139859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.958 [2024-11-15 11:38:43.139870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.958 [2024-11-15 11:38:43.139879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.958 [2024-11-15 11:38:43.139886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
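For reference, the namespace plumbing that nvmf_tcp_init traced above reduces to a short command sequence. The lines below are a minimal sketch only, assuming the cvl_0_0/cvl_0_1 device names, the 10.0.0.0/24 addresses and the nvmf_tgt path reported by this particular rig; they are not a replacement for the nvmf/common.sh helpers, which also record the full rule text in the iptables comment, install cleanup traps and handle multi-port setups.

  # Sketch of the traced nvmf_tcp_init steps (names/addresses taken from this log)
  ip netns add cvl_0_0_ns_spdk                                        # target runs in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator (host-side) address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                                  # tag so teardown can filter it back out
  ping -c 1 10.0.0.2                                                  # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability
  # The target app is then launched inside the namespace; waitforlisten is
  # sketched here as a simple poll on the default RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

The SPDK_NVMF tag on the iptables rule is what makes the teardown later in this log work: nvmftestfini restores the ruleset with iptables-save piped through grep -v SPDK_NVMF into iptables-restore, so only the rules the test added disappear.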
00:20:42.958 [2024-11-15 11:38:43.140617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.958 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:42.958 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:42.958 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.958 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.958 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.958 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.959 [2024-11-15 11:38:43.292974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.959 Malloc0 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.959 11:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.959 [2024-11-15 11:38:43.334214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1273612 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1273614 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1273615 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1273612 00:20:42.959 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.959 [2024-11-15 11:38:43.413091] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:42.959 [2024-11-15 11:38:43.413325] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:42.959 [2024-11-15 11:38:43.413546] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:43.893 Initializing NVMe Controllers 00:20:43.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:43.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:43.893 Initialization complete. Launching workers. 
00:20:43.893 ======================================================== 00:20:43.893 Latency(us) 00:20:43.893 Device Information : IOPS MiB/s Average min max 00:20:43.893 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3369.00 13.16 296.42 179.10 581.58 00:20:43.893 ======================================================== 00:20:43.893 Total : 3369.00 13.16 296.42 179.10 581.58 00:20:43.893 00:20:43.893 Initializing NVMe Controllers 00:20:43.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:43.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:43.893 Initialization complete. Launching workers. 00:20:43.893 ======================================================== 00:20:43.893 Latency(us) 00:20:43.893 Device Information : IOPS MiB/s Average min max 00:20:43.893 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3076.00 12.02 324.69 197.58 40937.68 00:20:43.893 ======================================================== 00:20:43.893 Total : 3076.00 12.02 324.69 197.58 40937.68 00:20:43.893 00:20:43.893 [2024-11-15 11:38:44.557040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcddc0 is same with the state(6) to be set 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1273614 00:20:43.893 Initializing NVMe Controllers 00:20:43.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:43.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:43.893 Initialization complete. Launching workers. 00:20:43.893 ======================================================== 00:20:43.893 Latency(us) 00:20:43.893 Device Information : IOPS MiB/s Average min max 00:20:43.893 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2768.00 10.81 360.82 179.62 643.03 00:20:43.893 ======================================================== 00:20:43.893 Total : 2768.00 10.81 360.82 179.62 643.03 00:20:43.893 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1273615 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.893 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.894 rmmod nvme_tcp 00:20:43.894 rmmod nvme_fabrics 00:20:43.894 rmmod nvme_keyring 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:43.894 11:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1273575 ']' 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1273575 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 1273575 ']' 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 1273575 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1273575 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1273575' 00:20:43.894 killing process with pid 1273575 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 1273575 00:20:43.894 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 1273575 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.152 11:38:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.684 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.684 00:20:46.684 real 0m9.995s 00:20:46.684 user 0m6.640s 00:20:46.684 sys 0m5.371s 00:20:46.684 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:46.684 11:38:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.684 ************************************ 00:20:46.684 END TEST nvmf_control_msg_list 00:20:46.684 ************************************ 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.684 ************************************ 00:20:46.684 START TEST nvmf_wait_for_buf 00:20:46.684 ************************************ 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:46.684 * Looking for test storage... 00:20:46.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:46.684 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.685 --rc genhtml_branch_coverage=1 00:20:46.685 --rc genhtml_function_coverage=1 00:20:46.685 --rc genhtml_legend=1 00:20:46.685 --rc geninfo_all_blocks=1 00:20:46.685 --rc geninfo_unexecuted_blocks=1 00:20:46.685 00:20:46.685 ' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.685 --rc genhtml_branch_coverage=1 00:20:46.685 --rc genhtml_function_coverage=1 00:20:46.685 --rc genhtml_legend=1 00:20:46.685 --rc geninfo_all_blocks=1 00:20:46.685 --rc geninfo_unexecuted_blocks=1 00:20:46.685 00:20:46.685 ' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.685 --rc genhtml_branch_coverage=1 00:20:46.685 --rc genhtml_function_coverage=1 00:20:46.685 --rc genhtml_legend=1 00:20:46.685 --rc geninfo_all_blocks=1 00:20:46.685 --rc geninfo_unexecuted_blocks=1 00:20:46.685 00:20:46.685 ' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.685 --rc genhtml_branch_coverage=1 00:20:46.685 --rc genhtml_function_coverage=1 00:20:46.685 --rc genhtml_legend=1 00:20:46.685 --rc geninfo_all_blocks=1 00:20:46.685 --rc geninfo_unexecuted_blocks=1 00:20:46.685 00:20:46.685 ' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.685 11:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:46.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:46.685 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.686 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.249 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.250 
11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:53.250 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:53.250 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:53.250 Found net devices under 0000:af:00.0: cvl_0_0 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:53.250 Found net devices under 0000:af:00.1: cvl_0_1 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.250 11:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.250 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:53.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:20:53.250 00:20:53.250 --- 10.0.0.2 ping statistics --- 00:20:53.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.250 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:53.250 00:20:53.250 --- 10.0.0.1 ping statistics --- 00:20:53.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.250 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.250 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1277513 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1277513 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 1277513 ']' 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 [2024-11-15 11:38:53.303278] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:20:53.251 [2024-11-15 11:38:53.303338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.251 [2024-11-15 11:38:53.404342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.251 [2024-11-15 11:38:53.452269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.251 [2024-11-15 11:38:53.452310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.251 [2024-11-15 11:38:53.452320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.251 [2024-11-15 11:38:53.452330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.251 [2024-11-15 11:38:53.452338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.251 [2024-11-15 11:38:53.453061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 Malloc0 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 [2024-11-15 11:38:53.669760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:53.251 [2024-11-15 11:38:53.697979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.251 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:53.251 [2024-11-15 11:38:53.799560] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:54.625 Initializing NVMe Controllers 00:20:54.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:54.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:54.625 Initialization complete. Launching workers. 00:20:54.625 ======================================================== 00:20:54.625 Latency(us) 00:20:54.625 Device Information : IOPS MiB/s Average min max 00:20:54.625 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32237.81 7241.76 63851.99 00:20:54.625 ======================================================== 00:20:54.625 Total : 129.00 16.12 32237.81 7241.76 63851.99 00:20:54.625 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:54.625 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.626 rmmod nvme_tcp 00:20:54.626 rmmod nvme_fabrics 00:20:54.626 rmmod nvme_keyring 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1277513 ']' 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1277513 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 1277513 ']' 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 1277513 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1277513 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1277513' 00:20:54.626 killing process with pid 1277513 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 1277513 00:20:54.626 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 1277513 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.885 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.786 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.786 00:20:56.786 real 0m10.561s 00:20:56.786 user 0m4.117s 00:20:56.786 sys 0m4.946s 00:20:56.786 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:56.786 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.786 ************************************ 00:20:56.786 END TEST nvmf_wait_for_buf 00:20:56.786 ************************************ 00:20:57.045 11:38:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:57.045 11:38:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:57.045 11:38:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:57.045 11:38:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:57.045 11:38:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.045 11:38:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.315 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:02.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:02.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:02.316 Found net devices under 0000:af:00.0: cvl_0_0 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:02.316 Found net devices under 0000:af:00.1: cvl_0_1 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:02.316 ************************************ 00:21:02.316 START TEST nvmf_perf_adq 00:21:02.316 ************************************ 00:21:02.316 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:02.575 * Looking for test storage... 00:21:02.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.576 11:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:02.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.576 --rc genhtml_branch_coverage=1 00:21:02.576 --rc genhtml_function_coverage=1 00:21:02.576 --rc genhtml_legend=1 00:21:02.576 --rc geninfo_all_blocks=1 00:21:02.576 --rc geninfo_unexecuted_blocks=1 00:21:02.576 00:21:02.576 ' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:02.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.576 --rc genhtml_branch_coverage=1 00:21:02.576 --rc genhtml_function_coverage=1 00:21:02.576 --rc genhtml_legend=1 00:21:02.576 --rc geninfo_all_blocks=1 00:21:02.576 --rc geninfo_unexecuted_blocks=1 00:21:02.576 00:21:02.576 ' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:02.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.576 --rc genhtml_branch_coverage=1 00:21:02.576 --rc genhtml_function_coverage=1 00:21:02.576 --rc genhtml_legend=1 00:21:02.576 --rc geninfo_all_blocks=1 00:21:02.576 --rc geninfo_unexecuted_blocks=1 00:21:02.576 00:21:02.576 ' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:02.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.576 --rc genhtml_branch_coverage=1 00:21:02.576 --rc genhtml_function_coverage=1 00:21:02.576 --rc genhtml_legend=1 00:21:02.576 --rc geninfo_all_blocks=1 00:21:02.576 --rc geninfo_unexecuted_blocks=1 00:21:02.576 00:21:02.576 ' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:02.576 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.576 11:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.847 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.848 11:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:07.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:07.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:07.848 Found net devices under 0000:af:00.0: cvl_0_0 00:21:07.848 11:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:07.848 Found net devices under 0000:af:00.1: cvl_0_1 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:07.848 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:09.230 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:11.131 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.401 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:16.402 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:16.402 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:16.402 Found net devices under 0000:af:00.0: cvl_0_0 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:16.402 Found net devices under 0000:af:00.1: cvl_0_1 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.402 11:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:21:16.402 00:21:16.402 --- 10.0.0.2 ping statistics --- 00:21:16.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.402 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:16.402 00:21:16.402 --- 10.0.0.1 ping statistics --- 00:21:16.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.402 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.402 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1286785 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1286785 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1286785 ']' 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:16.661 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.661 [2024-11-15 11:39:17.317397] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
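
The trace above shows nvmf_tcp_init splitting the two E810 ports across network namespaces before the target starts: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the peer port (cvl_0_1) stays in the default namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction confirms connectivity. A minimal sketch of the same split, assuming two physically connected ports whose names are passed in (the p0/p1 defaults are placeholders, not names from this run):

  #!/usr/bin/env bash
  # Minimal sketch of the target/initiator namespace split traced above.
  # p0 becomes the target-side port inside the namespace, p1 stays in the
  # default namespace as the initiator side; names are placeholders.
  set -euo pipefail

  TARGET_NS=cvl_0_0_ns_spdk       # namespace name used in the trace
  TARGET_IF=${1:-p0}
  INITIATOR_IF=${2:-p1}

  ip netns add "$TARGET_NS"
  ip link set "$TARGET_IF" netns "$TARGET_NS"

  # Target gets 10.0.0.2/24 inside the namespace, initiator keeps 10.0.0.1/24 outside.
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
  ip netns exec "$TARGET_NS" ip link set lo up

  # Admit NVMe/TCP (port 4420) from the initiator port, then verify reachability.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

Because the target namespace is recorded in NVMF_TARGET_NS_CMD, the nvmf_tgt instance starting here is launched under ip netns exec cvl_0_0_ns_spdk, so it listens on 10.0.0.2 while spdk_nvme_perf connects from the default namespace.
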
00:21:16.661 [2024-11-15 11:39:17.317437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.661 [2024-11-15 11:39:17.402194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.661 [2024-11-15 11:39:17.454440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.661 [2024-11-15 11:39:17.454490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.661 [2024-11-15 11:39:17.454500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.661 [2024-11-15 11:39:17.454509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.661 [2024-11-15 11:39:17.454517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.661 [2024-11-15 11:39:17.456566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.661 [2024-11-15 11:39:17.456665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.661 [2024-11-15 11:39:17.456742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.661 [2024-11-15 11:39:17.456746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.919 
11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.919 [2024-11-15 11:39:17.729155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.919 Malloc1 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.919 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:17.178 [2024-11-15 11:39:17.783964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1286810 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:17.178 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:19.080 "tick_rate": 2200000000, 00:21:19.080 "poll_groups": [ 00:21:19.080 { 00:21:19.080 "name": "nvmf_tgt_poll_group_000", 00:21:19.080 "admin_qpairs": 1, 00:21:19.080 "io_qpairs": 1, 00:21:19.080 "current_admin_qpairs": 1, 00:21:19.080 "current_io_qpairs": 1, 00:21:19.080 "pending_bdev_io": 0, 00:21:19.080 "completed_nvme_io": 21258, 00:21:19.080 "transports": [ 00:21:19.080 { 00:21:19.080 "trtype": "TCP" 00:21:19.080 } 00:21:19.080 ] 00:21:19.080 }, 00:21:19.080 { 00:21:19.080 "name": "nvmf_tgt_poll_group_001", 00:21:19.080 "admin_qpairs": 0, 00:21:19.080 "io_qpairs": 1, 00:21:19.080 "current_admin_qpairs": 0, 00:21:19.080 "current_io_qpairs": 1, 00:21:19.080 "pending_bdev_io": 0, 00:21:19.080 "completed_nvme_io": 20917, 00:21:19.080 "transports": [ 00:21:19.080 { 00:21:19.080 "trtype": "TCP" 00:21:19.080 } 00:21:19.080 ] 00:21:19.080 }, 00:21:19.080 { 00:21:19.080 "name": "nvmf_tgt_poll_group_002", 00:21:19.080 "admin_qpairs": 0, 00:21:19.080 "io_qpairs": 1, 00:21:19.080 "current_admin_qpairs": 0, 00:21:19.080 "current_io_qpairs": 1, 00:21:19.080 "pending_bdev_io": 0, 00:21:19.080 "completed_nvme_io": 22552, 00:21:19.080 "transports": [ 00:21:19.080 { 00:21:19.080 "trtype": "TCP" 00:21:19.080 } 00:21:19.080 ] 00:21:19.080 }, 00:21:19.080 { 00:21:19.080 "name": "nvmf_tgt_poll_group_003", 00:21:19.080 "admin_qpairs": 0, 00:21:19.080 "io_qpairs": 1, 00:21:19.080 "current_admin_qpairs": 0, 00:21:19.080 "current_io_qpairs": 1, 00:21:19.080 "pending_bdev_io": 0, 00:21:19.080 "completed_nvme_io": 15978, 00:21:19.080 "transports": [ 00:21:19.080 { 00:21:19.080 "trtype": "TCP" 00:21:19.080 } 00:21:19.080 ] 00:21:19.080 } 00:21:19.080 ] 00:21:19.080 }' 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:19.080 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1286810 00:21:27.193 Initializing NVMe Controllers 00:21:27.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:27.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:27.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:27.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:27.193 Initialization complete. Launching workers. 00:21:27.193 ======================================================== 00:21:27.193 Latency(us) 00:21:27.193 Device Information : IOPS MiB/s Average min max 00:21:27.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11579.56 45.23 5526.73 1977.26 9498.84 00:21:27.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10795.47 42.17 5927.92 1964.92 9612.57 00:21:27.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10986.87 42.92 5826.19 1809.49 9604.65 00:21:27.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8233.30 32.16 7774.83 3908.73 11569.58 00:21:27.193 ======================================================== 00:21:27.193 Total : 41595.21 162.48 6154.94 1809.49 11569.58 00:21:27.193 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.193 rmmod nvme_tcp 00:21:27.193 rmmod nvme_fabrics 00:21:27.193 rmmod nvme_keyring 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1286785 ']' 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1286785 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1286785 ']' 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1286785 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:27.193 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1286785 00:21:27.193 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:27.193 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:27.193 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1286785' 00:21:27.193 killing process with pid 1286785 00:21:27.193 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1286785 00:21:27.193 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1286785 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.452 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.989 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.989 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:29.989 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:29.989 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:30.925 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:32.830 11:39:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.105 11:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:38.105 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.105 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:38.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:38.106 Found net devices under 0000:af:00.0: cvl_0_0 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.106 11:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:38.106 Found net devices under 0000:af:00.1: cvl_0_1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:38.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:21:38.106 00:21:38.106 --- 10.0.0.2 ping statistics --- 00:21:38.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.106 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:21:38.106 00:21:38.106 --- 10.0.0.1 ping statistics --- 00:21:38.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.106 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:38.106 net.core.busy_poll = 1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:38.106 net.core.busy_read = 1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:38.106 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1290966 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1290966 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1290966 ']' 00:21:38.365 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.366 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:38.366 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.366 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:38.366 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.366 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:38.366 [2024-11-15 11:39:39.019281] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:21:38.366 [2024-11-15 11:39:39.019340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.366 [2024-11-15 11:39:39.121054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.366 [2024-11-15 11:39:39.171167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
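
For this second run, adq_configure_driver (traced just above, before the target restart) prepares the E810 port for ADQ: hardware TC offload is switched on, the channel-pkt-inspect-optimize private flag is switched off, busy polling is enabled system-wide, an mqprio root qdisc carves the queues into two traffic classes, and a hardware flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1; the set_xps_rxqs helper then aligns XPS with the receive queues. A condensed sketch of those steps, using the interface and namespace names from the trace (the helper script is omitted):

  # Condensed sketch of the adq_configure_driver steps traced above; run against
  # the E810 port that was already moved into the target namespace.
  NS=cvl_0_0_ns_spdk
  IF=cvl_0_0

  ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on
  ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

  # Busy polling keeps the socket readers polling instead of sleeping on interrupts.
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # Two traffic classes: TC0 = 2 queues starting at queue 0, TC1 = 2 queues starting
  # at queue 2, offloaded to the NIC ("hw 1 mode channel").
  ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec "$NS" tc qdisc add dev "$IF" ingress

  # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw).
  ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With --enable-placement-id 1 and --sock-priority 1 on the target side (configured below), connections arriving through TC 1 should end up grouped on the cores serving those queues, which is what the nvmf_get_stats check at the end of this run verifies.
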
00:21:38.366 [2024-11-15 11:39:39.171211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.366 [2024-11-15 11:39:39.171223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.366 [2024-11-15 11:39:39.171233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.366 [2024-11-15 11:39:39.171240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.366 [2024-11-15 11:39:39.173314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.366 [2024-11-15 11:39:39.173416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.366 [2024-11-15 11:39:39.173500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.366 [2024-11-15 11:39:39.173503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 [2024-11-15 11:39:39.414909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 Malloc1 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.626 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.626 [2024-11-15 11:39:39.475555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.885 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.885 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1291130 00:21:38.885 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:38.885 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.786 11:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:40.786 "tick_rate": 2200000000, 00:21:40.786 "poll_groups": [ 00:21:40.786 { 00:21:40.786 "name": "nvmf_tgt_poll_group_000", 00:21:40.786 "admin_qpairs": 1, 00:21:40.786 "io_qpairs": 1, 00:21:40.786 "current_admin_qpairs": 1, 00:21:40.786 "current_io_qpairs": 1, 00:21:40.786 "pending_bdev_io": 0, 00:21:40.786 "completed_nvme_io": 28574, 00:21:40.786 "transports": [ 00:21:40.786 { 00:21:40.786 "trtype": "TCP" 00:21:40.786 } 00:21:40.786 ] 00:21:40.786 }, 00:21:40.786 { 00:21:40.786 "name": "nvmf_tgt_poll_group_001", 00:21:40.786 "admin_qpairs": 0, 00:21:40.786 "io_qpairs": 3, 00:21:40.786 "current_admin_qpairs": 0, 00:21:40.786 "current_io_qpairs": 3, 00:21:40.786 "pending_bdev_io": 0, 00:21:40.786 "completed_nvme_io": 33052, 00:21:40.786 "transports": [ 00:21:40.786 { 00:21:40.786 "trtype": "TCP" 00:21:40.786 } 00:21:40.786 ] 00:21:40.786 }, 00:21:40.786 { 00:21:40.786 "name": "nvmf_tgt_poll_group_002", 00:21:40.786 "admin_qpairs": 0, 00:21:40.786 "io_qpairs": 0, 00:21:40.786 "current_admin_qpairs": 0, 00:21:40.786 "current_io_qpairs": 0, 00:21:40.786 "pending_bdev_io": 0, 00:21:40.786 "completed_nvme_io": 0, 00:21:40.786 "transports": [ 00:21:40.786 { 00:21:40.786 "trtype": "TCP" 00:21:40.786 } 00:21:40.786 ] 00:21:40.786 }, 00:21:40.786 { 00:21:40.786 "name": "nvmf_tgt_poll_group_003", 00:21:40.786 "admin_qpairs": 0, 00:21:40.786 "io_qpairs": 0, 00:21:40.786 "current_admin_qpairs": 0, 00:21:40.786 "current_io_qpairs": 0, 00:21:40.786 "pending_bdev_io": 0, 00:21:40.786 "completed_nvme_io": 0, 00:21:40.786 "transports": [ 00:21:40.786 { 00:21:40.786 "trtype": "TCP" 00:21:40.786 } 00:21:40.786 ] 00:21:40.786 } 00:21:40.786 ] 00:21:40.786 }' 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:40.786 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1291130 00:21:48.897 Initializing NVMe Controllers 00:21:48.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:48.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:48.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:48.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:48.897 Initialization complete. Launching workers. 
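The jq pipeline above appears to be the pass criterion for the ADQ steering: with the flower filter in place, I/O qpairs should land only on the poll groups whose sockets map to the TC1 queues, so the script counts groups with current_io_qpairs == 0 and expects at least two of the four to stay idle (count=2 here, so [[ 2 -lt 2 ]] is false and the check passes). Consistently, completed_nvme_io is non-zero only for nvmf_tgt_poll_group_000 and _001 in the JSON above. To rerun the same check by hand against a live target, using the standard rpc.py client instead of the rpc_cmd wrapper seen in the trace:

    scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
      | wc -l        # perf_adq.sh treats a result below 2 as a steering failure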
00:21:48.897 ======================================================== 00:21:48.897 Latency(us) 00:21:48.897 Device Information : IOPS MiB/s Average min max 00:21:48.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6142.60 23.99 10450.30 1260.36 57782.78 00:21:48.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15366.00 60.02 4164.64 1204.39 46166.20 00:21:48.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5371.50 20.98 11914.47 1333.65 57681.43 00:21:48.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5192.30 20.28 12361.81 1372.50 59880.62 00:21:48.897 ======================================================== 00:21:48.897 Total : 32072.39 125.28 7993.50 1204.39 59880.62 00:21:48.897 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.897 rmmod nvme_tcp 00:21:48.897 rmmod nvme_fabrics 00:21:48.897 rmmod nvme_keyring 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1290966 ']' 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1290966 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1290966 ']' 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1290966 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:48.897 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1290966 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1290966' 00:21:49.156 killing process with pid 1290966 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1290966 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1290966 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.156 
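A quick consistency check on the table above: the per-core IOPS sum to the Total row, 6142.60 + 15366.00 + 5371.50 + 5192.30 = 32072.40 (reported 32072.39), and the overall average latency is the IOPS-weighted mean of the per-core averages, which also works out to roughly 7,993 us as reported. The four workers report cores 4 through 7 because the initiator was launched with -c 0xF0 (binary 1111 0000, i.e. cores 4-7), while the target was started with -m 0xF (cores 0-3), so initiator and target never share a core.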
11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.156 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:52.441 00:21:52.441 real 0m49.883s 00:21:52.441 user 2m43.593s 00:21:52.441 sys 0m10.227s 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.441 ************************************ 00:21:52.441 END TEST nvmf_perf_adq 00:21:52.441 ************************************ 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.441 ************************************ 00:21:52.441 START TEST nvmf_shutdown 00:21:52.441 ************************************ 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:52.441 * Looking for test storage... 
00:21:52.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:52.441 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:52.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.700 --rc genhtml_branch_coverage=1 00:21:52.700 --rc genhtml_function_coverage=1 00:21:52.700 --rc genhtml_legend=1 00:21:52.700 --rc geninfo_all_blocks=1 00:21:52.700 --rc geninfo_unexecuted_blocks=1 00:21:52.700 00:21:52.700 ' 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:52.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.700 --rc genhtml_branch_coverage=1 00:21:52.700 --rc genhtml_function_coverage=1 00:21:52.700 --rc genhtml_legend=1 00:21:52.700 --rc geninfo_all_blocks=1 00:21:52.700 --rc geninfo_unexecuted_blocks=1 00:21:52.700 00:21:52.700 ' 00:21:52.700 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:52.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.701 --rc genhtml_branch_coverage=1 00:21:52.701 --rc genhtml_function_coverage=1 00:21:52.701 --rc genhtml_legend=1 00:21:52.701 --rc geninfo_all_blocks=1 00:21:52.701 --rc geninfo_unexecuted_blocks=1 00:21:52.701 00:21:52.701 ' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:52.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.701 --rc genhtml_branch_coverage=1 00:21:52.701 --rc genhtml_function_coverage=1 00:21:52.701 --rc genhtml_legend=1 00:21:52.701 --rc geninfo_all_blocks=1 00:21:52.701 --rc geninfo_unexecuted_blocks=1 00:21:52.701 00:21:52.701 ' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:52.701 11:39:53 
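The "[: : integer expression expected" message just above is bash's test builtin complaining that common.sh line 33 compares an empty variable numerically ('[' '' -eq 1 ']' in the trace); the comparison simply evaluates false and the script carries on, so it is cosmetic noise rather than a failure. The usual way to keep such a check quiet is to default the value first, e.g. (hypothetical flag name):

    [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]    # an empty or unset value collapses to 0 instead of tripping the builtin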
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.701 ************************************ 00:21:52.701 START TEST nvmf_shutdown_tc1 00:21:52.701 ************************************ 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.701 11:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.964 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.965 11:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.965 11:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:57.965 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:57.965 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:57.965 Found net devices under 0000:af:00.0: cvl_0_0 00:21:57.965 11:39:58 
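For context on the device matching above: 0x8086/0x159b is an Intel E810-series port (the ice driver seen in the trace is consistent with that), which is why both 0000:af:00.0 and 0000:af:00.1 fall into the e810 list built a few lines earlier rather than the x722 or mlx lists. On the same box the classification can be reproduced with something like:

    lspci -nn -d 8086:159b    # list only devices with that vendor:device pair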
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:57.965 Found net devices under 0000:af:00.1: cvl_0_1 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:57.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:21:57.965 00:21:57.965 --- 10.0.0.2 ping statistics --- 00:21:57.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.965 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:57.965 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
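Condensed, the nvmftestinit topology built above is: one physical port (cvl_0_0) is moved into the namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening port 4420 and pings in both directions to confirm the path. A minimal reproduction of the same setup:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace over the physical loop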
00:21:57.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:21:57.965 00:21:57.966 --- 10.0.0.1 ping statistics --- 00:21:57.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.966 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1296682 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1296682 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1296682 ']' 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:57.966 [2024-11-15 11:39:58.492441] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:21:57.966 [2024-11-15 11:39:58.492510] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.966 [2024-11-15 11:39:58.564556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.966 [2024-11-15 11:39:58.605271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.966 [2024-11-15 11:39:58.605306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.966 [2024-11-15 11:39:58.605312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.966 [2024-11-15 11:39:58.605318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.966 [2024-11-15 11:39:58.605323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.966 [2024-11-15 11:39:58.607020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.966 [2024-11-15 11:39:58.607126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.966 [2024-11-15 11:39:58.607227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:57.966 [2024-11-15 11:39:58.607229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.966 [2024-11-15 11:39:58.759742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:57.966 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:58.225 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:58.225 11:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.225 11:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.225 Malloc1 00:21:58.225 [2024-11-15 11:39:58.870892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.225 Malloc2 00:21:58.225 Malloc3 00:21:58.225 Malloc4 00:21:58.225 Malloc5 00:21:58.225 Malloc6 00:21:58.485 Malloc7 00:21:58.485 Malloc8 00:21:58.485 Malloc9 00:21:58.485 Malloc10 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1296853 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1296853 /var/tmp/bdevperf.sock 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1296853 ']' 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
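The ten create_subsystems iterations above write their RPCs to rpcs.txt rather than echoing them, so only the resulting Malloc1 through Malloc10 bdevs and the single listener notice show up in the log. Judging from those results and from the perf_adq target setup earlier in this log, each iteration presumably amounts to something like the following (sizes from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420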
00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.485 }, 00:21:58.485 "method": "bdev_nvme_attach_controller" 00:21:58.485 } 00:21:58.485 EOF 00:21:58.485 )") 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.485 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.485 { 00:21:58.485 "params": { 00:21:58.485 "name": "Nvme$subsystem", 00:21:58.485 "trtype": "$TEST_TRANSPORT", 00:21:58.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.485 "adrfam": "ipv4", 00:21:58.485 "trsvcid": "$NVMF_PORT", 00:21:58.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.485 "hdgst": ${hdgst:-false}, 00:21:58.485 "ddgst": ${ddgst:-false} 00:21:58.486 }, 00:21:58.486 "method": "bdev_nvme_attach_controller" 00:21:58.486 } 00:21:58.486 EOF 00:21:58.486 )") 00:21:58.486 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.746 { 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme$subsystem", 00:21:58.746 "trtype": "$TEST_TRANSPORT", 00:21:58.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "$NVMF_PORT", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.746 "hdgst": ${hdgst:-false}, 00:21:58.746 "ddgst": ${ddgst:-false} 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 } 00:21:58.746 EOF 00:21:58.746 )") 00:21:58.746 [2024-11-15 11:39:59.338632] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:21:58.746 [2024-11-15 11:39:59.338678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.746 { 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme$subsystem", 00:21:58.746 "trtype": "$TEST_TRANSPORT", 00:21:58.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "$NVMF_PORT", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.746 "hdgst": ${hdgst:-false}, 00:21:58.746 "ddgst": ${ddgst:-false} 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 } 00:21:58.746 EOF 00:21:58.746 )") 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.746 { 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme$subsystem", 00:21:58.746 "trtype": "$TEST_TRANSPORT", 00:21:58.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "$NVMF_PORT", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.746 "hdgst": ${hdgst:-false}, 00:21:58.746 "ddgst": ${ddgst:-false} 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 } 00:21:58.746 EOF 00:21:58.746 )") 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:58.746 { 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme$subsystem", 00:21:58.746 "trtype": "$TEST_TRANSPORT", 00:21:58.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "$NVMF_PORT", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.746 "hdgst": ${hdgst:-false}, 00:21:58.746 "ddgst": ${ddgst:-false} 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 } 00:21:58.746 EOF 00:21:58.746 )") 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
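The xtrace above records gen_nvmf_target_json (nvmf/common.sh) assembling the bdev config that bdev_svc later consumes: one bdev_nvme_attach_controller fragment per subsystem is emitted from a heredoc, collected into a config array, joined with commas, and pretty-printed through jq. Below is a minimal sketch of that pattern, not the verbatim nvmf/common.sh source: the tcp/10.0.0.2/4420 values are hard-coded as they appear in the expanded output further down, and the {"config":[...]} wrapper is illustrative only, so jq sees a single valid value.

config=()
for subsystem in 1 2 3; do
  # One JSON fragment per subsystem, mirroring the heredoc seen in the trace.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas (first character of IFS) and validate/pretty-print
# with jq; the surrounding object is only here so jq receives one JSON document.
(IFS=,; printf '{"config":[%s]}' "${config[*]}") | jq .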
00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:58.746 11:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme1", 00:21:58.746 "trtype": "tcp", 00:21:58.746 "traddr": "10.0.0.2", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "4420", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.746 "hdgst": false, 00:21:58.746 "ddgst": false 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 },{ 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme2", 00:21:58.746 "trtype": "tcp", 00:21:58.746 "traddr": "10.0.0.2", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "4420", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:58.746 "hdgst": false, 00:21:58.746 "ddgst": false 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 },{ 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme3", 00:21:58.746 "trtype": "tcp", 00:21:58.746 "traddr": "10.0.0.2", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "4420", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:58.746 "hdgst": false, 00:21:58.746 "ddgst": false 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 },{ 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme4", 00:21:58.746 "trtype": "tcp", 00:21:58.746 "traddr": "10.0.0.2", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "4420", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:58.746 "hdgst": false, 00:21:58.746 "ddgst": false 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 },{ 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme5", 00:21:58.746 "trtype": "tcp", 00:21:58.746 "traddr": "10.0.0.2", 00:21:58.746 "adrfam": "ipv4", 00:21:58.746 "trsvcid": "4420", 00:21:58.746 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:58.746 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:58.746 "hdgst": false, 00:21:58.746 "ddgst": false 00:21:58.746 }, 00:21:58.746 "method": "bdev_nvme_attach_controller" 00:21:58.746 },{ 00:21:58.746 "params": { 00:21:58.746 "name": "Nvme6", 00:21:58.747 "trtype": "tcp", 00:21:58.747 "traddr": "10.0.0.2", 00:21:58.747 "adrfam": "ipv4", 00:21:58.747 "trsvcid": "4420", 00:21:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:58.747 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:58.747 "hdgst": false, 00:21:58.747 "ddgst": false 00:21:58.747 }, 00:21:58.747 "method": "bdev_nvme_attach_controller" 00:21:58.747 },{ 00:21:58.747 "params": { 00:21:58.747 "name": "Nvme7", 00:21:58.747 "trtype": "tcp", 00:21:58.747 "traddr": "10.0.0.2", 00:21:58.747 "adrfam": "ipv4", 00:21:58.747 "trsvcid": "4420", 00:21:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:58.747 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:58.747 "hdgst": false, 00:21:58.747 "ddgst": false 00:21:58.747 }, 00:21:58.747 "method": "bdev_nvme_attach_controller" 00:21:58.747 },{ 00:21:58.747 "params": { 00:21:58.747 "name": "Nvme8", 00:21:58.747 "trtype": "tcp", 00:21:58.747 "traddr": "10.0.0.2", 00:21:58.747 "adrfam": "ipv4", 00:21:58.747 "trsvcid": "4420", 00:21:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:58.747 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:58.747 "hdgst": false, 00:21:58.747 "ddgst": false 00:21:58.747 }, 00:21:58.747 "method": "bdev_nvme_attach_controller" 00:21:58.747 },{ 00:21:58.747 "params": { 00:21:58.747 "name": "Nvme9", 00:21:58.747 "trtype": "tcp", 00:21:58.747 "traddr": "10.0.0.2", 00:21:58.747 "adrfam": "ipv4", 00:21:58.747 "trsvcid": "4420", 00:21:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:58.747 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:58.747 "hdgst": false, 00:21:58.747 "ddgst": false 00:21:58.747 }, 00:21:58.747 "method": "bdev_nvme_attach_controller" 00:21:58.747 },{ 00:21:58.747 "params": { 00:21:58.747 "name": "Nvme10", 00:21:58.747 "trtype": "tcp", 00:21:58.747 "traddr": "10.0.0.2", 00:21:58.747 "adrfam": "ipv4", 00:21:58.747 "trsvcid": "4420", 00:21:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:58.747 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:58.747 "hdgst": false, 00:21:58.747 "ddgst": false 00:21:58.747 }, 00:21:58.747 "method": "bdev_nvme_attach_controller" 00:21:58.747 }' 00:21:58.747 [2024-11-15 11:39:59.422629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.747 [2024-11-15 11:39:59.471065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.651 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.651 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1296853 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:00.652 11:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:01.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1296853 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1296682 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 [2024-11-15 11:40:02.424265] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:22:01.591 [2024-11-15 11:40:02.424311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297405 ] 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.591 { 00:22:01.591 "params": { 00:22:01.591 "name": "Nvme$subsystem", 00:22:01.591 "trtype": "$TEST_TRANSPORT", 00:22:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.591 "adrfam": "ipv4", 00:22:01.591 "trsvcid": "$NVMF_PORT", 00:22:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.591 "hdgst": ${hdgst:-false}, 00:22:01.591 "ddgst": ${ddgst:-false} 00:22:01.591 }, 00:22:01.591 "method": "bdev_nvme_attach_controller" 00:22:01.591 } 00:22:01.591 EOF 00:22:01.591 )") 00:22:01.591 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.851 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.851 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.851 { 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme$subsystem", 00:22:01.851 "trtype": "$TEST_TRANSPORT", 00:22:01.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.851 "adrfam": "ipv4", 00:22:01.851 "trsvcid": "$NVMF_PORT", 00:22:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.851 "hdgst": ${hdgst:-false}, 00:22:01.851 "ddgst": ${ddgst:-false} 00:22:01.851 }, 00:22:01.851 "method": "bdev_nvme_attach_controller" 00:22:01.851 } 00:22:01.851 EOF 00:22:01.851 )") 00:22:01.851 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.851 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
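The same config-generation loop runs again here, this time feeding bdevperf rather than bdev_svc. The generated JSON never touches disk: the --json /dev/fd/62 argument at shutdown.sh@92 is a process-substitution file descriptor, the same pattern the line-74 bdev_svc invocation shows explicitly. A hedged sketch of that invocation, using a relative path to the example binary instead of the full Jenkins workspace path, with gen_nvmf_target_json standing for the generator sketched earlier:

# 64-deep queue, 64 KiB I/O, verify workload, 1 second run -- the same flags
# recorded in the trace above.
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1

For reference, the aggregate reported further down (Total : 1928.53 IOPS, 120.53 MiB/s) is simply IOPS x 64 KiB per I/O: 1928.53 x 65536 B ~= 120.5 MiB/s.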
00:22:01.851 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:01.851 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme1", 00:22:01.851 "trtype": "tcp", 00:22:01.851 "traddr": "10.0.0.2", 00:22:01.851 "adrfam": "ipv4", 00:22:01.851 "trsvcid": "4420", 00:22:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.851 "hdgst": false, 00:22:01.851 "ddgst": false 00:22:01.851 }, 00:22:01.851 "method": "bdev_nvme_attach_controller" 00:22:01.851 },{ 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme2", 00:22:01.851 "trtype": "tcp", 00:22:01.851 "traddr": "10.0.0.2", 00:22:01.851 "adrfam": "ipv4", 00:22:01.851 "trsvcid": "4420", 00:22:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.851 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.851 "hdgst": false, 00:22:01.851 "ddgst": false 00:22:01.851 }, 00:22:01.851 "method": "bdev_nvme_attach_controller" 00:22:01.851 },{ 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme3", 00:22:01.851 "trtype": "tcp", 00:22:01.851 "traddr": "10.0.0.2", 00:22:01.851 "adrfam": "ipv4", 00:22:01.851 "trsvcid": "4420", 00:22:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.851 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.851 "hdgst": false, 00:22:01.851 "ddgst": false 00:22:01.851 }, 00:22:01.851 "method": "bdev_nvme_attach_controller" 00:22:01.851 },{ 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme4", 00:22:01.851 "trtype": "tcp", 00:22:01.851 "traddr": "10.0.0.2", 00:22:01.851 "adrfam": "ipv4", 00:22:01.851 "trsvcid": "4420", 00:22:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.851 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.851 "hdgst": false, 00:22:01.851 "ddgst": false 00:22:01.851 }, 00:22:01.851 "method": "bdev_nvme_attach_controller" 00:22:01.851 },{ 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme5", 00:22:01.851 "trtype": "tcp", 00:22:01.851 "traddr": "10.0.0.2", 00:22:01.851 "adrfam": "ipv4", 00:22:01.851 "trsvcid": "4420", 00:22:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.851 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.851 "hdgst": false, 00:22:01.851 "ddgst": false 00:22:01.851 }, 00:22:01.851 "method": "bdev_nvme_attach_controller" 00:22:01.851 },{ 00:22:01.851 "params": { 00:22:01.851 "name": "Nvme6", 00:22:01.851 "trtype": "tcp", 00:22:01.852 "traddr": "10.0.0.2", 00:22:01.852 "adrfam": "ipv4", 00:22:01.852 "trsvcid": "4420", 00:22:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.852 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.852 "hdgst": false, 00:22:01.852 "ddgst": false 00:22:01.852 }, 00:22:01.852 "method": "bdev_nvme_attach_controller" 00:22:01.852 },{ 00:22:01.852 "params": { 00:22:01.852 "name": "Nvme7", 00:22:01.852 "trtype": "tcp", 00:22:01.852 "traddr": "10.0.0.2", 00:22:01.852 "adrfam": "ipv4", 00:22:01.852 "trsvcid": "4420", 00:22:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.852 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.852 "hdgst": false, 00:22:01.852 "ddgst": false 00:22:01.852 }, 00:22:01.852 "method": "bdev_nvme_attach_controller" 00:22:01.852 },{ 00:22:01.852 "params": { 00:22:01.852 "name": "Nvme8", 00:22:01.852 "trtype": "tcp", 00:22:01.852 "traddr": "10.0.0.2", 00:22:01.852 "adrfam": "ipv4", 00:22:01.852 "trsvcid": "4420", 00:22:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.852 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:01.852 "hdgst": false, 00:22:01.852 "ddgst": false 00:22:01.852 }, 00:22:01.852 "method": "bdev_nvme_attach_controller" 00:22:01.852 },{ 00:22:01.852 "params": { 00:22:01.852 "name": "Nvme9", 00:22:01.852 "trtype": "tcp", 00:22:01.852 "traddr": "10.0.0.2", 00:22:01.852 "adrfam": "ipv4", 00:22:01.852 "trsvcid": "4420", 00:22:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.852 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.852 "hdgst": false, 00:22:01.852 "ddgst": false 00:22:01.852 }, 00:22:01.852 "method": "bdev_nvme_attach_controller" 00:22:01.852 },{ 00:22:01.852 "params": { 00:22:01.852 "name": "Nvme10", 00:22:01.852 "trtype": "tcp", 00:22:01.852 "traddr": "10.0.0.2", 00:22:01.852 "adrfam": "ipv4", 00:22:01.852 "trsvcid": "4420", 00:22:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.852 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.852 "hdgst": false, 00:22:01.852 "ddgst": false 00:22:01.852 }, 00:22:01.852 "method": "bdev_nvme_attach_controller" 00:22:01.852 }' 00:22:01.852 [2024-11-15 11:40:02.509063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.852 [2024-11-15 11:40:02.557480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.229 Running I/O for 1 seconds... 00:22:04.164 1368.00 IOPS, 85.50 MiB/s 00:22:04.164 Latency(us) 00:22:04.164 [2024-11-15T10:40:05.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.164 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme1n1 : 1.13 174.84 10.93 0.00 0.00 352513.23 1727.77 305040.29 00:22:04.164 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme2n1 : 1.17 180.94 11.31 0.00 0.00 325602.11 21209.83 324105.31 00:22:04.164 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme3n1 : 1.23 208.51 13.03 0.00 0.00 291462.75 21090.68 326011.81 00:22:04.164 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme4n1 : 1.22 209.12 13.07 0.00 0.00 284714.82 16324.42 312666.30 00:22:04.164 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme5n1 : 1.22 209.73 13.11 0.00 0.00 277856.12 26095.24 299320.79 00:22:04.164 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme6n1 : 1.17 166.86 10.43 0.00 0.00 339427.54 2874.65 291694.78 00:22:04.164 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.164 Verification LBA range: start 0x0 length 0x400 00:22:04.164 Nvme7n1 : 1.23 207.58 12.97 0.00 0.00 269271.51 23950.43 289788.28 00:22:04.165 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.165 Verification LBA range: start 0x0 length 0x400 00:22:04.165 Nvme8n1 : 1.24 206.96 12.94 0.00 0.00 264220.39 14596.65 314572.80 00:22:04.165 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.165 Verification LBA range: start 0x0 length 0x400 00:22:04.165 Nvme9n1 : 1.24 206.05 12.88 0.00 0.00 259768.32 12809.31 318385.80 00:22:04.165 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:04.165 Verification LBA range: start 0x0 length 0x400 00:22:04.165 Nvme10n1 : 1.22 157.92 9.87 0.00 0.00 329856.00 14894.55 333637.82 00:22:04.165 [2024-11-15T10:40:05.018Z] =================================================================================================================== 00:22:04.165 [2024-11-15T10:40:05.018Z] Total : 1928.53 120.53 0.00 0.00 295760.95 1727.77 333637.82 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.423 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.423 rmmod nvme_tcp 00:22:04.423 rmmod nvme_fabrics 00:22:04.682 rmmod nvme_keyring 00:22:04.682 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1296682 ']' 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1296682 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1296682 ']' 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1296682 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1296682 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1296682' 00:22:04.683 killing process with pid 1296682 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1296682 00:22:04.683 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1296682 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.942 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.478 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.478 00:22:07.478 real 0m14.457s 00:22:07.478 user 0m34.275s 00:22:07.478 sys 0m5.034s 00:22:07.478 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:07.478 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:07.479 ************************************ 00:22:07.479 END TEST nvmf_shutdown_tc1 00:22:07.479 ************************************ 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:07.479 ************************************ 00:22:07.479 START TEST nvmf_shutdown_tc2 00:22:07.479 ************************************ 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:07.479 11:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:07.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:07.479 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:07.479 Found net devices under 0000:af:00.0: cvl_0_0 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.479 11:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:07.479 Found net devices under 0000:af:00.1: cvl_0_1 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.479 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.480 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:22:07.480 00:22:07.480 --- 10.0.0.2 ping statistics --- 00:22:07.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.480 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:22:07.480 00:22:07.480 --- 10.0.0.1 ping statistics --- 00:22:07.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.480 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.480 11:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1298557 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1298557 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1298557 ']' 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:07.480 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.480 [2024-11-15 11:40:08.274828] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:22:07.480 [2024-11-15 11:40:08.274872] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.740 [2024-11-15 11:40:08.332477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.740 [2024-11-15 11:40:08.374435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.740 [2024-11-15 11:40:08.374473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.740 [2024-11-15 11:40:08.374481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.740 [2024-11-15 11:40:08.374486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.740 [2024-11-15 11:40:08.374491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
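Before this second test case (tc2) reaches the target startup above, nvmftestinit has detected the two E810 ports (cvl_0_0/cvl_0_1), moved the target-side port into a private network namespace, assigned the 10.0.0.1/10.0.0.2 addresses, opened TCP port 4420, verified connectivity, and launched nvmf_tgt inside the namespace. A condensed, hedged replay of those steps as root (interface names, addresses, and flags taken from the trace; the real scripts add retries, an iptables comment tag, and absolute paths):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target in the namespace with the same instance/trace/core-mask flags
# as the trace (-i 0 -e 0xFFFF -m 0x1E), then wait for its RPC socket
# (/var/tmp/spdk.sock) before issuing any rpc_cmd configuration calls.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &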
00:22:07.740 [2024-11-15 11:40:08.376135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.740 [2024-11-15 11:40:08.376236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.740 [2024-11-15 11:40:08.376346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.740 [2024-11-15 11:40:08.376348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.740 [2024-11-15 11:40:08.551602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:07.740 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.000 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.000 Malloc1 00:22:08.000 [2024-11-15 11:40:08.659291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.000 Malloc2 00:22:08.000 Malloc3 00:22:08.000 Malloc4 00:22:08.000 Malloc5 00:22:08.000 Malloc6 00:22:08.259 Malloc7 00:22:08.259 Malloc8 00:22:08.259 Malloc9 00:22:08.259 Malloc10 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1298850 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1298850 /var/tmp/bdevperf.sock 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1298850 ']' 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.260 11:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.260 { 00:22:08.260 "params": { 00:22:08.260 "name": "Nvme$subsystem", 00:22:08.260 "trtype": "$TEST_TRANSPORT", 00:22:08.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.260 "adrfam": "ipv4", 00:22:08.260 "trsvcid": "$NVMF_PORT", 00:22:08.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.260 "hdgst": ${hdgst:-false}, 00:22:08.260 "ddgst": ${ddgst:-false} 00:22:08.260 }, 00:22:08.260 "method": "bdev_nvme_attach_controller" 00:22:08.260 } 00:22:08.260 EOF 00:22:08.260 )") 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.260 { 00:22:08.260 "params": { 00:22:08.260 "name": "Nvme$subsystem", 00:22:08.260 "trtype": "$TEST_TRANSPORT", 00:22:08.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.260 "adrfam": "ipv4", 00:22:08.260 "trsvcid": "$NVMF_PORT", 00:22:08.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.260 "hdgst": ${hdgst:-false}, 00:22:08.260 "ddgst": ${ddgst:-false} 00:22:08.260 }, 00:22:08.260 "method": "bdev_nvme_attach_controller" 00:22:08.260 } 00:22:08.260 EOF 00:22:08.260 )") 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.260 { 00:22:08.260 "params": { 00:22:08.260 
"name": "Nvme$subsystem", 00:22:08.260 "trtype": "$TEST_TRANSPORT", 00:22:08.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.260 "adrfam": "ipv4", 00:22:08.260 "trsvcid": "$NVMF_PORT", 00:22:08.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.260 "hdgst": ${hdgst:-false}, 00:22:08.260 "ddgst": ${ddgst:-false} 00:22:08.260 }, 00:22:08.260 "method": "bdev_nvme_attach_controller" 00:22:08.260 } 00:22:08.260 EOF 00:22:08.260 )") 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.260 { 00:22:08.260 "params": { 00:22:08.260 "name": "Nvme$subsystem", 00:22:08.260 "trtype": "$TEST_TRANSPORT", 00:22:08.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.260 "adrfam": "ipv4", 00:22:08.260 "trsvcid": "$NVMF_PORT", 00:22:08.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.260 "hdgst": ${hdgst:-false}, 00:22:08.260 "ddgst": ${ddgst:-false} 00:22:08.260 }, 00:22:08.260 "method": "bdev_nvme_attach_controller" 00:22:08.260 } 00:22:08.260 EOF 00:22:08.260 )") 00:22:08.260 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.520 { 00:22:08.520 "params": { 00:22:08.520 "name": "Nvme$subsystem", 00:22:08.520 "trtype": "$TEST_TRANSPORT", 00:22:08.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.520 "adrfam": "ipv4", 00:22:08.520 "trsvcid": "$NVMF_PORT", 00:22:08.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.520 "hdgst": ${hdgst:-false}, 00:22:08.520 "ddgst": ${ddgst:-false} 00:22:08.520 }, 00:22:08.520 "method": "bdev_nvme_attach_controller" 00:22:08.520 } 00:22:08.520 EOF 00:22:08.520 )") 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.520 { 00:22:08.520 "params": { 00:22:08.520 "name": "Nvme$subsystem", 00:22:08.520 "trtype": "$TEST_TRANSPORT", 00:22:08.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.520 "adrfam": "ipv4", 00:22:08.520 "trsvcid": "$NVMF_PORT", 00:22:08.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.520 "hdgst": ${hdgst:-false}, 00:22:08.520 "ddgst": ${ddgst:-false} 00:22:08.520 }, 00:22:08.520 "method": "bdev_nvme_attach_controller" 00:22:08.520 } 00:22:08.520 EOF 00:22:08.520 )") 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.520 { 00:22:08.520 "params": { 00:22:08.520 "name": "Nvme$subsystem", 00:22:08.520 "trtype": "$TEST_TRANSPORT", 00:22:08.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.520 "adrfam": "ipv4", 00:22:08.520 "trsvcid": "$NVMF_PORT", 00:22:08.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.520 "hdgst": ${hdgst:-false}, 00:22:08.520 "ddgst": ${ddgst:-false} 00:22:08.520 }, 00:22:08.520 "method": "bdev_nvme_attach_controller" 00:22:08.520 } 00:22:08.520 EOF 00:22:08.520 )") 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.520 { 00:22:08.520 "params": { 00:22:08.520 "name": "Nvme$subsystem", 00:22:08.520 "trtype": "$TEST_TRANSPORT", 00:22:08.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.520 "adrfam": "ipv4", 00:22:08.520 "trsvcid": "$NVMF_PORT", 00:22:08.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.520 "hdgst": ${hdgst:-false}, 00:22:08.520 "ddgst": ${ddgst:-false} 00:22:08.520 }, 00:22:08.520 "method": "bdev_nvme_attach_controller" 00:22:08.520 } 00:22:08.520 EOF 00:22:08.520 )") 00:22:08.520 [2024-11-15 11:40:09.134390] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:22:08.520 [2024-11-15 11:40:09.134454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298850 ] 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.520 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.520 { 00:22:08.520 "params": { 00:22:08.520 "name": "Nvme$subsystem", 00:22:08.520 "trtype": "$TEST_TRANSPORT", 00:22:08.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.520 "adrfam": "ipv4", 00:22:08.520 "trsvcid": "$NVMF_PORT", 00:22:08.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.520 "hdgst": ${hdgst:-false}, 00:22:08.520 "ddgst": ${ddgst:-false} 00:22:08.520 }, 00:22:08.520 "method": "bdev_nvme_attach_controller" 00:22:08.521 } 00:22:08.521 EOF 00:22:08.521 )") 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:08.521 { 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme$subsystem", 00:22:08.521 "trtype": "$TEST_TRANSPORT", 00:22:08.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.521 
"adrfam": "ipv4", 00:22:08.521 "trsvcid": "$NVMF_PORT", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.521 "hdgst": ${hdgst:-false}, 00:22:08.521 "ddgst": ${ddgst:-false} 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 } 00:22:08.521 EOF 00:22:08.521 )") 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:08.521 11:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme1", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme2", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme3", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme4", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme5", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme6", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme7", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 
00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme8", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme9", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 },{ 00:22:08.521 "params": { 00:22:08.521 "name": "Nvme10", 00:22:08.521 "trtype": "tcp", 00:22:08.521 "traddr": "10.0.0.2", 00:22:08.521 "adrfam": "ipv4", 00:22:08.521 "trsvcid": "4420", 00:22:08.521 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:08.521 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:08.521 "hdgst": false, 00:22:08.521 "ddgst": false 00:22:08.521 }, 00:22:08.521 "method": "bdev_nvme_attach_controller" 00:22:08.521 }' 00:22:08.521 [2024-11-15 11:40:09.230698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.521 [2024-11-15 11:40:09.278825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.899 Running I/O for 10 seconds... 
00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.467 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.468 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.468 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.468 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:10.468 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:10.468 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.726 11:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1298850 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1298850 ']' 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1298850 00:22:10.726 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1298850 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1298850' 00:22:10.986 killing process with pid 1298850 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1298850 00:22:10.986 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1298850 00:22:10.986 Received shutdown signal, test time was about 0.958130 seconds 00:22:10.986 00:22:10.986 Latency(us) 00:22:10.986 [2024-11-15T10:40:11.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme1n1 : 0.94 204.18 12.76 0.00 0.00 308805.82 17754.30 308853.29 00:22:10.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme2n1 : 0.94 203.74 12.73 0.00 0.00 301299.28 34555.35 293601.28 00:22:10.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme3n1 : 0.92 214.06 13.38 0.00 0.00 277864.06 3902.37 295507.78 00:22:10.986 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme4n1 : 0.94 207.69 12.98 0.00 0.00 277894.22 8638.84 
316479.30 00:22:10.986 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme5n1 : 0.95 201.69 12.61 0.00 0.00 281029.04 18350.08 310759.80 00:22:10.986 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme6n1 : 0.95 205.50 12.84 0.00 0.00 267178.36 3813.00 289788.28 00:22:10.986 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme7n1 : 0.94 209.46 13.09 0.00 0.00 253269.28 4051.32 318385.80 00:22:10.986 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme8n1 : 0.96 200.60 12.54 0.00 0.00 259180.14 12630.57 326011.81 00:22:10.986 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme9n1 : 0.91 140.09 8.76 0.00 0.00 355741.79 17515.99 335544.32 00:22:10.986 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:10.986 Verification LBA range: start 0x0 length 0x400 00:22:10.986 Nvme10n1 : 0.92 144.83 9.05 0.00 0.00 330129.26 3440.64 322198.81 00:22:10.986 [2024-11-15T10:40:11.839Z] =================================================================================================================== 00:22:10.986 [2024-11-15T10:40:11.839Z] Total : 1931.84 120.74 0.00 0.00 287490.12 3440.64 335544.32 00:22:11.245 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1298557 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.181 rmmod nvme_tcp 00:22:12.181 rmmod nvme_fabrics 00:22:12.181 rmmod nvme_keyring 00:22:12.181 11:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1298557 ']' 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1298557 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1298557 ']' 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1298557 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:12.181 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1298557 00:22:12.441 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:12.441 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:12.441 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1298557' 00:22:12.441 killing process with pid 1298557 00:22:12.441 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1298557 00:22:12.441 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1298557 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.700 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.701 11:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.701 11:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.778 00:22:14.778 real 0m7.593s 00:22:14.778 user 0m22.958s 00:22:14.778 sys 0m1.370s 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.778 ************************************ 00:22:14.778 END TEST nvmf_shutdown_tc2 00:22:14.778 ************************************ 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:14.778 ************************************ 00:22:14.778 START TEST nvmf_shutdown_tc3 00:22:14.778 ************************************ 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.778 11:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.778 11:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:14.778 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:14.778 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.778 11:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:14.778 Found net devices under 0000:af:00.0: cvl_0_0 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.778 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:14.779 Found net devices under 0000:af:00.1: cvl_0_1 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.779 11:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.779 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:22:15.066 00:22:15.066 --- 10.0.0.2 ping statistics --- 00:22:15.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.066 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:22:15.066 00:22:15.066 --- 10.0.0.1 ping statistics --- 00:22:15.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.066 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1300045 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1300045 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1300045 ']' 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
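For orientation, the nvmf_tcp_init plumbing that was just replayed for tc3 (and earlier for tc2) condenses to the commands below, copied from the trace: one port of the two-port NIC is moved into a private namespace as the target interface at 10.0.0.2, the peer port stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is verified in both directions. common.sh then prepends "ip netns exec cvl_0_0_ns_spdk" to NVMF_APP on every run, which is why the nvmf_tgt launch line for tc3 shows that prefix stacked several times.

# Condensed from the nvmf/common.sh trace above (same interface names and addresses as this run).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator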
00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.066 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:15.066 [2024-11-15 11:40:15.881270] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:22:15.066 [2024-11-15 11:40:15.881325] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.335 [2024-11-15 11:40:15.952767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.335 [2024-11-15 11:40:15.993050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.335 [2024-11-15 11:40:15.993084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.335 [2024-11-15 11:40:15.993091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.335 [2024-11-15 11:40:15.993096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.335 [2024-11-15 11:40:15.993101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.335 [2024-11-15 11:40:15.994773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.335 [2024-11-15 11:40:15.994874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.335 [2024-11-15 11:40:15.994973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.335 [2024-11-15 11:40:15.994972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.335 [2024-11-15 11:40:16.150347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.335 11:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.335 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:15.594 
11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.594 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.594 Malloc1 00:22:15.594 [2024-11-15 11:40:16.258275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.594 Malloc2 00:22:15.594 Malloc3 00:22:15.594 Malloc4 00:22:15.594 Malloc5 00:22:15.594 Malloc6 00:22:15.853 Malloc7 00:22:15.853 Malloc8 00:22:15.853 Malloc9 00:22:15.853 Malloc10 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1300344 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1300344 /var/tmp/bdevperf.sock 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1300344 ']' 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
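At this point the trace launches bdevperf against the ten freshly created subsystems. The JSON it consumes is not a file on disk: gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem (the expanded result is printed further down) and is handed to bdevperf over /dev/fd/63, i.e. via process substitution. A hedged sketch of that invocation pattern, using only the paths and options visible in the trace, looks like this; gen_nvmf_target_json itself is a helper from the SPDK test harness and must be sourced from nvmf/common.sh.

# Sketch of the bdevperf launch traced above (target/shutdown.sh@125-127).
# gen_nvmf_target_json comes from the SPDK test harness (nvmf/common.sh);
# the process substitution is what appears in the trace as "--json /dev/fd/63".
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK_ROOT/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# The harness then waits for bdevperf's RPC socket (framework_wait_init) and
# polls Nvme1n1's num_read_ops until at least 100 reads complete
# (shutdown.sh@59-68) before killing the nvmf target to exercise shutdown.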
00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.853 { 00:22:15.853 "params": { 00:22:15.853 "name": "Nvme$subsystem", 00:22:15.853 "trtype": "$TEST_TRANSPORT", 00:22:15.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.853 "adrfam": "ipv4", 00:22:15.853 "trsvcid": "$NVMF_PORT", 00:22:15.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.853 "hdgst": ${hdgst:-false}, 00:22:15.853 "ddgst": ${ddgst:-false} 00:22:15.853 }, 00:22:15.853 "method": "bdev_nvme_attach_controller" 00:22:15.853 } 00:22:15.853 EOF 00:22:15.853 )") 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.853 { 00:22:15.853 "params": { 00:22:15.853 "name": "Nvme$subsystem", 00:22:15.853 "trtype": "$TEST_TRANSPORT", 00:22:15.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.853 "adrfam": "ipv4", 00:22:15.853 "trsvcid": "$NVMF_PORT", 00:22:15.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.853 "hdgst": ${hdgst:-false}, 00:22:15.853 "ddgst": ${ddgst:-false} 00:22:15.853 }, 00:22:15.853 "method": "bdev_nvme_attach_controller" 00:22:15.853 } 00:22:15.853 EOF 00:22:15.853 )") 00:22:15.853 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.113 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.113 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.113 { 00:22:16.113 "params": { 00:22:16.113 "name": "Nvme$subsystem", 00:22:16.113 "trtype": "$TEST_TRANSPORT", 00:22:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.113 "adrfam": "ipv4", 00:22:16.113 "trsvcid": "$NVMF_PORT", 00:22:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.113 "hdgst": ${hdgst:-false}, 00:22:16.113 "ddgst": ${ddgst:-false} 00:22:16.113 }, 00:22:16.113 "method": "bdev_nvme_attach_controller" 00:22:16.113 } 00:22:16.113 EOF 00:22:16.113 )") 00:22:16.113 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.113 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.113 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:16.113 { 00:22:16.113 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.114 { 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.114 { 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.114 { 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 [2024-11-15 11:40:16.737164] Starting SPDK 
v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:22:16.114 [2024-11-15 11:40:16.737210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300344 ] 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.114 { 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.114 { 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:16.114 { 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme$subsystem", 00:22:16.114 "trtype": "$TEST_TRANSPORT", 00:22:16.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "$NVMF_PORT", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:16.114 "hdgst": ${hdgst:-false}, 00:22:16.114 "ddgst": ${ddgst:-false} 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 } 00:22:16.114 EOF 00:22:16.114 )") 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:16.114 11:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme1", 00:22:16.114 "trtype": "tcp", 00:22:16.114 "traddr": "10.0.0.2", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "4420", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.114 "hdgst": false, 00:22:16.114 "ddgst": false 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 },{ 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme2", 00:22:16.114 "trtype": "tcp", 00:22:16.114 "traddr": "10.0.0.2", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "4420", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:16.114 "hdgst": false, 00:22:16.114 "ddgst": false 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 },{ 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme3", 00:22:16.114 "trtype": "tcp", 00:22:16.114 "traddr": "10.0.0.2", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "4420", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:16.114 "hdgst": false, 00:22:16.114 "ddgst": false 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 },{ 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme4", 00:22:16.114 "trtype": "tcp", 00:22:16.114 "traddr": "10.0.0.2", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "4420", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:16.114 "hdgst": false, 00:22:16.114 "ddgst": false 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 },{ 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme5", 00:22:16.114 "trtype": "tcp", 00:22:16.114 "traddr": "10.0.0.2", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "4420", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:16.114 "hdgst": false, 00:22:16.114 "ddgst": false 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 },{ 00:22:16.114 "params": { 00:22:16.114 "name": "Nvme6", 00:22:16.114 "trtype": "tcp", 00:22:16.114 "traddr": "10.0.0.2", 00:22:16.114 "adrfam": "ipv4", 00:22:16.114 "trsvcid": "4420", 00:22:16.114 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:16.114 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:16.114 "hdgst": false, 00:22:16.114 "ddgst": false 00:22:16.114 }, 00:22:16.114 "method": "bdev_nvme_attach_controller" 00:22:16.114 },{ 00:22:16.114 "params": { 00:22:16.115 "name": "Nvme7", 00:22:16.115 "trtype": "tcp", 00:22:16.115 "traddr": "10.0.0.2", 00:22:16.115 "adrfam": "ipv4", 00:22:16.115 "trsvcid": "4420", 00:22:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:16.115 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:16.115 "hdgst": false, 00:22:16.115 "ddgst": false 00:22:16.115 }, 00:22:16.115 "method": "bdev_nvme_attach_controller" 00:22:16.115 },{ 00:22:16.115 "params": { 00:22:16.115 "name": "Nvme8", 00:22:16.115 "trtype": "tcp", 00:22:16.115 "traddr": "10.0.0.2", 00:22:16.115 "adrfam": "ipv4", 00:22:16.115 "trsvcid": "4420", 00:22:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:16.115 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:16.115 "hdgst": false, 00:22:16.115 "ddgst": false 00:22:16.115 }, 00:22:16.115 "method": "bdev_nvme_attach_controller" 00:22:16.115 },{ 00:22:16.115 "params": { 00:22:16.115 "name": "Nvme9", 00:22:16.115 "trtype": "tcp", 00:22:16.115 "traddr": "10.0.0.2", 00:22:16.115 "adrfam": "ipv4", 00:22:16.115 "trsvcid": "4420", 00:22:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:16.115 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:16.115 "hdgst": false, 00:22:16.115 "ddgst": false 00:22:16.115 }, 00:22:16.115 "method": "bdev_nvme_attach_controller" 00:22:16.115 },{ 00:22:16.115 "params": { 00:22:16.115 "name": "Nvme10", 00:22:16.115 "trtype": "tcp", 00:22:16.115 "traddr": "10.0.0.2", 00:22:16.115 "adrfam": "ipv4", 00:22:16.115 "trsvcid": "4420", 00:22:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:16.115 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:16.115 "hdgst": false, 00:22:16.115 "ddgst": false 00:22:16.115 }, 00:22:16.115 "method": "bdev_nvme_attach_controller" 00:22:16.115 }' 00:22:16.115 [2024-11-15 11:40:16.822675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.115 [2024-11-15 11:40:16.871126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.021 Running I/O for 10 seconds... 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.021 11:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.021 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.280 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:18.280 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:18.280 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:18.540 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:18.814 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1300045 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1300045 ']' 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1300045 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1300045 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1300045' 00:22:18.814 killing process with pid 1300045 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1300045 00:22:18.814 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1300045 00:22:18.814 [2024-11-15 11:40:19.557506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557622] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the 
state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.557908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fe90 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.558974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.814 [2024-11-15 11:40:19.559070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 
11:40:19.559075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same 
with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559322] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.559356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32910 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the 
state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 
11:40:19.560818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.560839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30380 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same 
with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562365] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.815 [2024-11-15 11:40:19.562371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the 
state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.562516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30850 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.564970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f310c0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 
11:40:19.565755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same 
with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.565997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566016] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31590 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.566860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.816 [2024-11-15 11:40:19.566898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.816 [2024-11-15 11:40:19.566912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.816 [2024-11-15 11:40:19.566922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.816 [2024-11-15 11:40:19.566933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.816 [2024-11-15 11:40:19.566943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.816 [2024-11-15 11:40:19.566954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.816 [2024-11-15 11:40:19.566965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.816 [2024-11-15 11:40:19.566975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a17f0 is same with the state(6) to be set 00:22:18.816 [2024-11-15 11:40:19.567015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.816 [2024-11-15 11:40:19.567032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.816 [2024-11-15 11:40:19.567043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.816 [2024-11-15 11:40:19.567053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.816 [2024-11-15 11:40:19.567064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b6610 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-15 11:40:19.567168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:18.817 the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819730 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the 
state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c2e50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567456] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31a60 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a26f0 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567609] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2270 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc550 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.567797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.817 [2024-11-15 11:40:19.567872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.817 [2024-11-15 11:40:19.567881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1396750 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568428] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.817 [2024-11-15 11:40:19.568533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the 
state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.568670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f31f50 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.569283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.818 [2024-11-15 11:40:19.576427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.576979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.576989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:18.818 [2024-11-15 11:40:19.577656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.818 [2024-11-15 11:40:19.577714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.818 [2024-11-15 11:40:19.577724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.577746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.577768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.577789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.577812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.577834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.577856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 
[2024-11-15 11:40:19.577877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.577907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:18.819 [2024-11-15 11:40:19.579978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.579993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 
[2024-11-15 11:40:19.580190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 
11:40:19.580416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.580988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.580998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.581010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.581032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.581047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.581069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819
[2024-11-15 11:40:19.581082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819
[2024-11-15 11:40:19.581099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819
[2024-11-15 11:40:19.581121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.581127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819
[2024-11-15 11:40:19.581134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819
[2024-11-15 11:40:19.581155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819 [2024-11-15 11:40:19.581167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819
[2024-11-15 11:40:19.581172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.819 [2024-11-15 11:40:19.581177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.819 [2024-11-15 11:40:19.581185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.819
[2024-11-15 11:40:19.581193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820
[2024-11-15 11:40:19.581234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820
[2024-11-15 11:40:19.581267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820
[2024-11-15 11:40:19.581318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820
[2024-11-15 11:40:19.581377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.581380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32420 is same with the state(6) to be set 00:22:18.820
[2024-11-15 11:40:19.581393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820
[2024-11-15 11:40:19.581439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.581467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.581477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820
[2024-11-15 11:40:19.591750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a17f0 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.591791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b6610 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.591810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819730 (9): Bad file descriptor 00:22:18.820
[2024-11-15 11:40:19.591848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.591862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.591874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*:
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.591884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.591895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.591909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.591920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.591931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.591940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa570 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.591976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.591989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.592000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.592021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.592031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.592042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.820 [2024-11-15 11:40:19.592052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.592061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3550 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.592083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c2e50 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.592107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a26f0 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.592126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2270 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.592143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc550 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.592161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396750 (9): Bad file descriptor 00:22:18.820 task offset: 24576 on job bdev=Nvme4n1 fails 00:22:18.820 1414.00 IOPS, 
88.38 MiB/s [2024-11-15T10:40:19.673Z] [2024-11-15 11:40:19.595324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:18.820 [2024-11-15 11:40:19.595839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:18.820 [2024-11-15 11:40:19.596022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.820 [2024-11-15 11:40:19.596048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cc550 with addr=10.0.0.2, port=4420 00:22:18.820 [2024-11-15 11:40:19.596061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc550 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.596930] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.820 [2024-11-15 11:40:19.597367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a2270 with addr=10.0.0.2, port=4420 00:22:18.820 [2024-11-15 11:40:19.597379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2270 is same with the state(6) to be set 00:22:18.820 [2024-11-15 11:40:19.597399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc550 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.597455] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597522] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597577] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597668] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597726] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597777] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:18.820 [2024-11-15 11:40:19.597806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2270 (9): Bad file descriptor 00:22:18.820 [2024-11-15 11:40:19.597820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:18.820 [2024-11-15 11:40:19.597830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:18.820 [2024-11-15 11:40:19.597841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:18.820 [2024-11-15 11:40:19.597853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:18.820 [2024-11-15 11:40:19.597992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 
11:40:19.598236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598470] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.820 [2024-11-15 11:40:19.598663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.820 [2024-11-15 11:40:19.598676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.598978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.598991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.599452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.599468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1650d50 is same with the state(6) to be set 00:22:18.821 [2024-11-15 11:40:19.599549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:18.821 [2024-11-15 11:40:19.599560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:18.821 [2024-11-15 11:40:19.599571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:18.821 [2024-11-15 11:40:19.599580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:18.821 [2024-11-15 11:40:19.600996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:18.821 [2024-11-15 11:40:19.601275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.821 [2024-11-15 11:40:19.601295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1819730 with addr=10.0.0.2, port=4420 00:22:18.821 [2024-11-15 11:40:19.601307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819730 is same with the state(6) to be set 00:22:18.821 [2024-11-15 11:40:19.601684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819730 (9): Bad file descriptor 00:22:18.821 [2024-11-15 11:40:19.601747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:18.821 [2024-11-15 11:40:19.601759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:18.821 [2024-11-15 11:40:19.601770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:18.821 [2024-11-15 11:40:19.601780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:18.821 [2024-11-15 11:40:19.601813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa570 (9): Bad file descriptor 00:22:18.821 [2024-11-15 11:40:19.601842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f3550 (9): Bad file descriptor 00:22:18.821 [2024-11-15 11:40:19.602009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.821 [2024-11-15 11:40:19.602687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.821 [2024-11-15 11:40:19.602697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.602981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.602991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.603471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.603482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a6650 is same with the state(6) to be set 00:22:18.822 [2024-11-15 11:40:19.604959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.604976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.604990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.822 [2024-11-15 11:40:19.605922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.822 [2024-11-15 11:40:19.605934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.605945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.605957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.605967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.605980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.605990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:18.823 [2024-11-15 11:40:19.606025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 
11:40:19.606257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.606461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.606472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a7610 is same with the state(6) to be set 00:22:18.823 [2024-11-15 11:40:19.607940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.607958] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.607972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.607982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.607995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.823 [2024-11-15 11:40:19.608984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.823 [2024-11-15 11:40:19.608994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.609368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.609380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a86d0 is same with the state(6) to be set 00:22:18.824 [2024-11-15 11:40:19.610870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.610893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.610908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.610918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.610930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.610941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.610953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.610963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.610975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.610985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.610997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
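
Every completion in the dump above reports the same status pair, which SPDK prints as "(00/08)": status code type 0x0 (Generic Command Status) and status code 0x08, defined by NVMe as Command Aborted due to SQ Deletion, i.e. the I/O submission queue was deleted while these READ/WRITE commands were still outstanding. A minimal offline decoding sketch for one such record follows; the COMPLETION_RE pattern, STATUS table and decode() helper are illustrative assumptions for parsing this log format, not part of SPDK or of the test itself:

#!/usr/bin/env python3
# Illustrative helper (not part of SPDK): parse one spdk_nvme_print_completion
# record as it appears in the log above and decode the "(sct/sc)" status pair.
import re

# Assumed record shape, taken from the surrounding lines:
#   ... nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
#   ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<text>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

# Only the status pair that actually occurs in this log is mapped here.
STATUS = {
    (0x0, 0x08): "Generic Command Status / Command Aborted due to SQ Deletion",
}

def decode(line: str):
    """Return qid, cid and a decoded status string for one completion record."""
    m = COMPLETION_RE.search(line)
    if not m:
        return None
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    return {
        "qid": int(m["qid"]),
        "cid": int(m["cid"]),
        "status": STATUS.get((sct, sc), m["text"]),
    }

if __name__ == "__main__":
    sample = ("spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION "
              "(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    print(decode(sample))

Run against any of the completion records above, the sketch yields qid 1 and the SQ-deletion abort status, which matches the nvme_tcp recv-state errors interleaved in the same dump: the TCP qpairs are being torn down while I/O is still queued.
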
00:22:18.824 [2024-11-15 11:40:19.611930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.611986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.611996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.612018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.612040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.612062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.612083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.612106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 11:40:19.612128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.824 [2024-11-15 
11:40:19.612149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.824 [2024-11-15 11:40:19.612163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.612305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.612316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22562f0 is same with the state(6) to be set 00:22:18.825 [2024-11-15 11:40:19.613813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.613985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.613995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.614982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.614994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:18.825 [2024-11-15 11:40:19.615005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 11:40:19.615204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.825 [2024-11-15 11:40:19.615216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.825 [2024-11-15 
11:40:19.615227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.825 [2024-11-15 11:40:19.615239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:18.825 [2024-11-15 11:40:19.615249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:18.825 [2024-11-15 11:40:19.615259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3b70 is same with the state(6) to be set
00:22:18.825 [2024-11-15 11:40:19.616708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:18.825 [2024-11-15 11:40:19.616733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:18.825 [2024-11-15 11:40:19.616748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:18.825 [2024-11-15 11:40:19.616762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:18.825 [2024-11-15 11:40:19.616851] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:18.825 [2024-11-15 11:40:19.616959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:18.825 [2024-11-15 11:40:19.617205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:18.825 [2024-11-15 11:40:19.617229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a26f0 with addr=10.0.0.2, port=4420
00:22:18.826 [2024-11-15 11:40:19.617241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a26f0 is same with the state(6) to be set
00:22:18.826 [2024-11-15 11:40:19.617488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:18.826 [2024-11-15 11:40:19.617504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1396750 with addr=10.0.0.2, port=4420
00:22:18.826 [2024-11-15 11:40:19.617515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1396750 is same with the state(6) to be set
00:22:18.826 [2024-11-15 11:40:19.617759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:18.826 [2024-11-15 11:40:19.617774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a17f0 with addr=10.0.0.2, port=4420
00:22:18.826 [2024-11-15 11:40:19.617783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a17f0 is same with the state(6) to be set
00:22:18.826 [2024-11-15 11:40:19.617884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:18.826 [2024-11-15 11:40:19.617899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c2e50 with addr=10.0.0.2, port=4420
00:22:18.826 [2024-11-15 11:40:19.617909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c2e50 is same with the state(6) to be set
00:22:18.826 [2024-11-15 11:40:19.619550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 
[2024-11-15 11:40:19.619803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.619980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.619991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 
11:40:19.620026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.620988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.620999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f1570 is same with the state(6) to be set 00:22:18.826 [2024-11-15 11:40:19.622471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.826 [2024-11-15 11:40:19.622487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.826 [2024-11-15 11:40:19.622502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.622981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.622993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:18.827 [2024-11-15 11:40:19.623515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 
11:40:19.623737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.827 [2024-11-15 11:40:19.623890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.827 [2024-11-15 11:40:19.623900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164fa50 is same with the state(6) to be set 00:22:18.827 [2024-11-15 11:40:19.625320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:18.827 [2024-11-15 11:40:19.625339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:18.827 [2024-11-15 11:40:19.625354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:18.827 [2024-11-15 11:40:19.625372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:18.827 00:22:18.827 Latency(us) 00:22:18.827 [2024-11-15T10:40:19.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.827 Job: Nvme1n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme1n1 ended in about 1.06 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme1n1 : 1.06 126.75 7.92 60.54 0.00 337477.90 23712.12 301227.29 00:22:18.827 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme2n1 ended in about 1.06 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme2n1 : 1.06 120.73 7.55 60.37 0.00 341151.81 21328.99 314572.80 00:22:18.827 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme3n1 ended in about 1.06 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme3n1 : 1.06 120.40 7.53 60.20 0.00 334296.90 32172.22 310759.80 00:22:18.827 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme4n1 ended in about 1.05 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme4n1 : 1.05 183.54 11.47 61.18 0.00 240411.23 17515.99 314572.80 00:22:18.827 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme5n1 ended in about 1.05 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme5n1 : 1.05 183.31 11.46 61.10 0.00 234812.97 27405.96 284068.77 00:22:18.827 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme6n1 ended in about 1.07 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme6n1 : 1.07 120.07 7.50 60.03 0.00 311506.08 30146.56 295507.78 00:22:18.827 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme7n1 ended in about 1.07 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme7n1 : 1.07 119.74 7.48 59.87 0.00 304635.04 24546.21 310759.80 00:22:18.827 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme8n1 ended in about 1.07 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme8n1 : 1.07 119.10 7.44 59.55 0.00 298715.07 13822.14 316479.30 00:22:18.827 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme9n1 ended in about 1.08 seconds with error 00:22:18.827 Verification LBA range: start 0x0 length 0x400 00:22:18.827 Nvme9n1 : 1.08 118.78 7.42 59.39 0.00 291756.84 17754.30 310759.80 00:22:18.827 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.827 Job: Nvme10n1 ended in about 1.05 seconds with error 00:22:18.828 Verification LBA range: start 0x0 length 0x400 00:22:18.828 Nvme10n1 : 1.05 121.51 7.59 60.76 0.00 275772.82 19422.49 339357.32 00:22:18.828 [2024-11-15T10:40:19.681Z] =================================================================================================================== 00:22:18.828 [2024-11-15T10:40:19.681Z] Total : 1333.94 83.37 602.99 0.00 293467.50 13822.14 339357.32 00:22:19.087 [2024-11-15 11:40:19.667390] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:19.087 [2024-11-15 11:40:19.667444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:19.087 [2024-11-15 11:40:19.667826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.087 [2024-11-15 
11:40:19.667851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b6610 with addr=10.0.0.2, port=4420 00:22:19.087 [2024-11-15 11:40:19.667866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b6610 is same with the state(6) to be set 00:22:19.087 [2024-11-15 11:40:19.667884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a26f0 (9): Bad file descriptor 00:22:19.087 [2024-11-15 11:40:19.667900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396750 (9): Bad file descriptor 00:22:19.087 [2024-11-15 11:40:19.667913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a17f0 (9): Bad file descriptor 00:22:19.087 [2024-11-15 11:40:19.667926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c2e50 (9): Bad file descriptor 00:22:19.087 [2024-11-15 11:40:19.668261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.668283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cc550 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.668294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cc550 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.668472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.668490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a2270 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.668500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2270 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.668593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.668607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1819730 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.668617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819730 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.668731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.668745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fa570 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.668755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa570 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.668971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.668984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f3550 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.668995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3550 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.669008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b6610 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.669019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 
11:40:19.669028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.669040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.669052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.669063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.669072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.669081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.669089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.669099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.669108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.669117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.669125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.669135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.669143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.669152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.669161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.669218] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:22:19.088 [2024-11-15 11:40:19.670008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cc550 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.670029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2270 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.670041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819730 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.670053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa570 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.670064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f3550 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.670075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.670085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.670094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.670103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.670159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:19.088 [2024-11-15 11:40:19.670175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:19.088 [2024-11-15 11:40:19.670187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:19.088 [2024-11-15 11:40:19.670198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:19.088 [2024-11-15 11:40:19.670237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.670247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.670257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.670265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.670275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.670284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.670293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.670302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:22:19.088 [2024-11-15 11:40:19.670312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.670320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.670330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.670338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.670348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.670357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.670366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.670378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.670389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.670398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.670407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.670416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:19.088 [2024-11-15 11:40:19.670747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.670767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c2e50 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.670778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c2e50 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.671026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.671041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a17f0 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.671052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a17f0 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.671307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.671321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1396750 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.671331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1396750 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.671522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.088 [2024-11-15 11:40:19.671537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a26f0 with addr=10.0.0.2, port=4420 00:22:19.088 [2024-11-15 11:40:19.671547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a26f0 is same with the state(6) to be set 00:22:19.088 [2024-11-15 11:40:19.671584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c2e50 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.671598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a17f0 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.671611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396750 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.671623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a26f0 (9): Bad file descriptor 00:22:19.088 [2024-11-15 11:40:19.671656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:19.088 [2024-11-15 11:40:19.671666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:19.088 [2024-11-15 11:40:19.671675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:19.088 [2024-11-15 11:40:19.671684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:19.088 [2024-11-15 11:40:19.671694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:19.089 [2024-11-15 11:40:19.671702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:19.089 [2024-11-15 11:40:19.671711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:22:19.089 [2024-11-15 11:40:19.671720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:19.089 [2024-11-15 11:40:19.671733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:19.089 [2024-11-15 11:40:19.671742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:19.089 [2024-11-15 11:40:19.671751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:19.089 [2024-11-15 11:40:19.671760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:19.089 [2024-11-15 11:40:19.671770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:19.089 [2024-11-15 11:40:19.671778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:19.089 [2024-11-15 11:40:19.671788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:19.089 [2024-11-15 11:40:19.671797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:19.347 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1300344 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1300344 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1300344 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:20.283 11:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.283 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.283 rmmod nvme_tcp 00:22:20.283 rmmod nvme_fabrics 00:22:20.283 rmmod nvme_keyring 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1300045 ']' 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1300045 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1300045 ']' 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1300045 00:22:20.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1300045) - No such process 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1300045 is not found' 00:22:20.283 Process with pid 1300045 is not found 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.283 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.819 00:22:22.819 real 0m7.574s 00:22:22.819 user 0m18.991s 00:22:22.819 sys 0m1.309s 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.819 ************************************ 00:22:22.819 END TEST nvmf_shutdown_tc3 00:22:22.819 ************************************ 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:22.819 ************************************ 00:22:22.819 START TEST nvmf_shutdown_tc4 00:22:22.819 ************************************ 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.819 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:22.820 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:22.820 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:22.820 Found net devices under 0000:af:00.0: cvl_0_0 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:22.820 Found net devices under 0000:af:00.1: cvl_0_1 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.820 11:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.820 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:22.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:22:22.821 00:22:22.821 --- 10.0.0.2 ping statistics --- 00:22:22.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.821 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:22:22.821 00:22:22.821 --- 10.0.0.1 ping statistics --- 00:22:22.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.821 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1301526 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1301526 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1301526 ']' 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.821 11:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.821 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:22.821 [2024-11-15 11:40:23.591696] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:22:22.821 [2024-11-15 11:40:23.591760] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.821 [2024-11-15 11:40:23.665169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.080 [2024-11-15 11:40:23.705757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.080 [2024-11-15 11:40:23.705793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.080 [2024-11-15 11:40:23.705800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.080 [2024-11-15 11:40:23.705806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.080 [2024-11-15 11:40:23.705810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.080 [2024-11-15 11:40:23.707364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.080 [2024-11-15 11:40:23.707474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.080 [2024-11-15 11:40:23.707562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:23.080 [2024-11-15 11:40:23.707565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:23.080 [2024-11-15 11:40:23.862737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.080 11:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:23.080 
11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.080 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:23.338 Malloc1 00:22:23.338 [2024-11-15 11:40:23.969643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.338 Malloc2 00:22:23.338 Malloc3 00:22:23.338 Malloc4 00:22:23.338 Malloc5 00:22:23.338 Malloc6 00:22:23.596 Malloc7 00:22:23.596 Malloc8 00:22:23.596 Malloc9 00:22:23.596 Malloc10 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1301832 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:23.596 11:40:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:23.854 [2024-11-15 11:40:24.474537] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
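For readability, the sequence this trace has been exercising up to this point can be condensed into a handful of commands. The sketch below is reconstructed only from the invocations visible in the trace itself (the nvmf_tgt and spdk_nvme_perf flags are copied verbatim from this run); rpc.py stands in for the rpc_cmd wrapper the script uses, and the ten Malloc/subsystem/listener RPCs that the script batches into rpcs.txt are elided, so treat it as an illustrative outline rather than the test script itself:

  # start the target inside the test namespace (same flags as this run);
  # the script waits for /var/tmp/spdk.sock before issuing any RPCs
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  tgtpid=$!

  # create the TCP transport used by the test
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # ...ten Malloc-backed subsystems with listeners on 10.0.0.2:4420 are
  # created here via the batched RPCs (omitted)...

  # drive random writes at the 10.0.0.2:4420 listener for 20 seconds
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!

  # after a short delay, kill the target while perf still has I/O in flight
  sleep 5
  kill "$tgtpid"
  wait "$perfpid" || true

The flood of "Write completed with error (sct=0, sc=8)", "starting I/O failed: -6" and "CQ transport error -6" lines that follows is consistent with perf draining its outstanding I/O after the target has been killed mid-run, which is the shutdown-under-load condition this test case targets.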
00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1301526 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1301526 ']' 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1301526 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1301526 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1301526' 00:22:29.133 killing process with pid 1301526 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1301526 00:22:29.133 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1301526 00:22:29.133 [2024-11-15 11:40:29.472364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62d30 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.472418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62d30 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.472426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62d30 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.472432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62d30 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.472439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62d30 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66790 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66790 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66790 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66790 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66790 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d66790 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2010 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2010 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2010 is same with the state(6) to be set 00:22:29.133 [2024-11-15 11:40:29.477617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2010 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.477623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2010 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2500 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d662c0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d662c0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d662c0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d662c0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d662c0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.478446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d662c0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482923] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.482973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd41e0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46b0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4b80 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4b80 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4b80 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4b80 is same with the 
state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.483658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4b80 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3d10 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3d10 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3d10 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3d10 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3d10 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3d10 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.484990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5520 is same with the state(6) to be set 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 starting I/O failed: -6 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 starting I/O failed: -6 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 starting I/O failed: -6 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error 
(sct=0, sc=8) 00:22:29.134 [2024-11-15 11:40:29.485493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd59f0 is same with the state(6) to be set 00:22:29.134 [2024-11-15 11:40:29.485509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd59f0 is same with the state(6) to be set 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 [2024-11-15 11:40:29.485516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd59f0 is same with the state(6) to be set 00:22:29.134 starting I/O failed: -6 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 starting I/O failed: -6 00:22:29.134 [2024-11-15 11:40:29.485608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.134 Write completed with error (sct=0, sc=8) 00:22:29.134 [2024-11-15 11:40:29.485624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.485630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.485636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 [2024-11-15 11:40:29.485642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.485648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.485654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.485659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5ec0 is same with Write completed with error (sct=0, sc=8) 00:22:29.135 the state(6) to be set 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 [2024-11-15 11:40:29.485919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.135 NVMe io qpair process completion error 00:22:29.135 [2024-11-15 11:40:29.486125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be 
set 00:22:29.135 [2024-11-15 11:40:29.486145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 [2024-11-15 11:40:29.486185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5050 is same with the state(6) to be set 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 
Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 [2024-11-15 11:40:29.487308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 [2024-11-15 11:40:29.488384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.135 starting I/O failed: -6 00:22:29.135 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, 
sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 [2024-11-15 11:40:29.489693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 
starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 [2024-11-15 11:40:29.491993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.136 NVMe io qpair process completion error 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 starting I/O failed: -6 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.136 Write completed with error (sct=0, sc=8) 00:22:29.137 starting 
I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 [2024-11-15 11:40:29.493369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 
00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 [2024-11-15 11:40:29.494475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O 
failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 [2024-11-15 11:40:29.495795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.137 starting I/O failed: -6 00:22:29.137 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 
Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write 
completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 [2024-11-15 11:40:29.498066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.138 NVMe io qpair process completion error 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 
00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 [2024-11-15 11:40:29.499558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 starting I/O failed: -6 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.138 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 [2024-11-15 11:40:29.500634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed 
with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting 
I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 [2024-11-15 11:40:29.501944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.139 Write completed with error (sct=0, sc=8) 00:22:29.139 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 
00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 [2024-11-15 11:40:29.512141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.140 NVMe io qpair process completion error 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 
00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 [2024-11-15 11:40:29.514099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 
starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 [2024-11-15 11:40:29.515495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.140 Write completed with error (sct=0, sc=8) 00:22:29.140 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error 
(sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 [2024-11-15 11:40:29.517156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O 
failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.141 starting I/O failed: -6 00:22:29.141 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 [2024-11-15 11:40:29.523786] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.142 NVMe io qpair process completion error 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 [2024-11-15 11:40:29.525032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 
00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 [2024-11-15 11:40:29.526117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 
Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 starting I/O failed: -6 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.142 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 [2024-11-15 11:40:29.527419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 
00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 
00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 [2024-11-15 11:40:29.530781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.143 NVMe io qpair process completion error 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with 
error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 starting I/O failed: -6 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.143 Write completed with error (sct=0, sc=8) 00:22:29.144 [2024-11-15 11:40:29.532123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 
Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 [2024-11-15 11:40:29.533196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error 
(sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 [2024-11-15 11:40:29.534504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.144 Write completed with error (sct=0, sc=8) 00:22:29.144 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error 
(sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 [2024-11-15 11:40:29.536820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.145 NVMe io qpair process completion error 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 
00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 [2024-11-15 11:40:29.538278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 starting I/O failed: -6 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.145 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error 
(sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 [2024-11-15 11:40:29.539467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 
00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 [2024-11-15 11:40:29.540708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error 
(sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.146 starting I/O failed: -6 00:22:29.146 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error 
(sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 [2024-11-15 11:40:29.550290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.147 NVMe io qpair process completion error 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 
00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 [2024-11-15 11:40:29.551544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 
00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 [2024-11-15 11:40:29.552918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 starting I/O failed: -6 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.147 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 
00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 [2024-11-15 11:40:29.554556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 
00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 [2024-11-15 11:40:29.557447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.148 NVMe io qpair process completion error 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 starting I/O failed: -6 00:22:29.148 Write 
completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.148 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 [2024-11-15 11:40:29.559300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 
00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 [2024-11-15 11:40:29.560853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 
Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.149 Write completed with error (sct=0, sc=8) 00:22:29.149 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 [2024-11-15 11:40:29.562287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 
00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 
00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 starting I/O failed: -6 00:22:29.150 [2024-11-15 11:40:29.570145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.150 NVMe io qpair process completion error 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.150 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 
00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 [2024-11-15 11:40:29.572525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 
00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 Write completed with error (sct=0, sc=8) 00:22:29.151 [2024-11-15 11:40:29.573970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:29.151 NVMe io qpair process completion error 00:22:29.151 Initializing NVMe Controllers 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:29.151 Controller IO queue size 128, less than required. 00:22:29.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:29.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:29.151 Initialization complete. Launching workers. 00:22:29.151 ======================================================== 00:22:29.151 Latency(us) 00:22:29.151 Device Information : IOPS MiB/s Average min max 00:22:29.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1604.15 68.93 79808.75 927.08 151367.31 00:22:29.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1597.95 68.66 80160.52 943.44 149717.01 00:22:29.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1612.68 69.30 79598.48 1188.76 155019.18 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1617.59 69.51 79456.86 869.10 148541.71 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1595.82 68.57 80587.61 869.91 147169.43 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1592.19 68.41 80806.71 1066.18 168503.23 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1598.38 68.68 80647.18 1363.44 146844.08 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1617.17 69.49 79765.88 1520.74 181722.55 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1603.50 68.90 80445.24 764.94 157767.55 00:22:29.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1583.65 68.05 80055.69 920.42 141820.56 00:22:29.152 ======================================================== 00:22:29.152 Total : 16023.09 688.49 80131.29 764.94 181722.55 00:22:29.152 00:22:29.152 [2024-11-15 11:40:29.577422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4390 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5380 is same with the state(6) to be set 00:22:29.152 [2024-11-15 
11:40:29.577544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6360 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a59e0 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a46c0 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4060 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a56b0 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a49f0 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5050 is same with the state(6) to be set 00:22:29.152 [2024-11-15 11:40:29.577853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6540 is same with the state(6) to be set 00:22:29.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:29.152 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1301832 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1301832 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1301832 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm 
-rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.089 rmmod nvme_tcp 00:22:30.089 rmmod nvme_fabrics 00:22:30.089 rmmod nvme_keyring 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1301526 ']' 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1301526 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1301526 ']' 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1301526 00:22:30.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1301526) - No such process 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1301526 is not found' 00:22:30.089 Process with pid 1301526 is not found 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.089 11:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.089 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.626 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.626 00:22:32.626 real 0m9.811s 00:22:32.626 user 0m25.816s 00:22:32.626 sys 0m4.493s 00:22:32.626 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:32.626 ************************************ 00:22:32.626 END TEST nvmf_shutdown_tc4 00:22:32.626 ************************************ 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:32.626 00:22:32.626 real 0m39.912s 00:22:32.626 user 1m42.251s 00:22:32.626 sys 0m12.505s 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:32.626 ************************************ 00:22:32.626 END TEST nvmf_shutdown 00:22:32.626 ************************************ 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.626 ************************************ 00:22:32.626 START TEST nvmf_nsid 00:22:32.626 ************************************ 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:32.626 * Looking for test storage... 
00:22:32.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.626 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:32.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.627 --rc genhtml_branch_coverage=1 00:22:32.627 --rc genhtml_function_coverage=1 00:22:32.627 --rc genhtml_legend=1 00:22:32.627 --rc geninfo_all_blocks=1 00:22:32.627 --rc geninfo_unexecuted_blocks=1 00:22:32.627 00:22:32.627 ' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:32.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.627 --rc genhtml_branch_coverage=1 00:22:32.627 --rc genhtml_function_coverage=1 00:22:32.627 --rc genhtml_legend=1 00:22:32.627 --rc geninfo_all_blocks=1 00:22:32.627 --rc geninfo_unexecuted_blocks=1 00:22:32.627 00:22:32.627 ' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:32.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.627 --rc genhtml_branch_coverage=1 00:22:32.627 --rc genhtml_function_coverage=1 00:22:32.627 --rc genhtml_legend=1 00:22:32.627 --rc geninfo_all_blocks=1 00:22:32.627 --rc geninfo_unexecuted_blocks=1 00:22:32.627 00:22:32.627 ' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:32.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.627 --rc genhtml_branch_coverage=1 00:22:32.627 --rc genhtml_function_coverage=1 00:22:32.627 --rc genhtml_legend=1 00:22:32.627 --rc geninfo_all_blocks=1 00:22:32.627 --rc geninfo_unexecuted_blocks=1 00:22:32.627 00:22:32.627 ' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.627 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:37.902 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:37.902 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.902 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:37.903 Found net devices under 0000:af:00.0: cvl_0_0 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:37.903 Found net devices under 0000:af:00.1: cvl_0_1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.903 11:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:37.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:22:37.903 00:22:37.903 --- 10.0.0.2 ping statistics --- 00:22:37.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.903 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:22:37.903 00:22:37.903 --- 10.0.0.1 ping statistics --- 00:22:37.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.903 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1306460 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1306460 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1306460 ']' 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:37.903 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:37.903 [2024-11-15 11:40:38.604865] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:22:37.903 [2024-11-15 11:40:38.604922] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.903 [2024-11-15 11:40:38.706922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.163 [2024-11-15 11:40:38.755269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.163 [2024-11-15 11:40:38.755311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.163 [2024-11-15 11:40:38.755321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.163 [2024-11-15 11:40:38.755330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.163 [2024-11-15 11:40:38.755338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.163 [2024-11-15 11:40:38.756066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1306645 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=549b8e32-dbb1-4a42-9be2-f35bddec684a 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0974f05e-0f26-4e39-bd71-a49e0e0bc1cc 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=78dafc4a-8026-40d7-a768-9803118b4365 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.163 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:38.163 null0 00:22:38.163 null1 00:22:38.163 null2 00:22:38.163 [2024-11-15 11:40:38.956337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.163 [2024-11-15 11:40:38.958289] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:22:38.163 [2024-11-15 11:40:38.958347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306645 ] 00:22:38.163 [2024-11-15 11:40:38.980561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1306645 /var/tmp/tgt2.sock 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1306645 ']' 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:38.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:38.422 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:38.422 [2024-11-15 11:40:39.024357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.422 [2024-11-15 11:40:39.065514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.682 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:38.682 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:38.682 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:38.944 [2024-11-15 11:40:39.693917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.944 [2024-11-15 11:40:39.710033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:38.944 nvme0n1 nvme0n2 00:22:38.944 nvme1n1 00:22:38.944 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:38.944 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:38.944 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:22:40.322 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:41.259 11:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 549b8e32-dbb1-4a42-9be2-f35bddec684a 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=549b8e32dbb14a429be2f35bddec684a 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 549B8E32DBB14A429BE2F35BDDEC684A 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 549B8E32DBB14A429BE2F35BDDEC684A == \5\4\9\B\8\E\3\2\D\B\B\1\4\A\4\2\9\B\E\2\F\3\5\B\D\D\E\C\6\8\4\A ]] 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0974f05e-0f26-4e39-bd71-a49e0e0bc1cc 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:41.259 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0974f05e0f264e39bd71a49e0e0bc1cc 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0974F05E0F264E39BD71A49E0E0BC1CC 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0974F05E0F264E39BD71A49E0E0BC1CC == \0\9\7\4\F\0\5\E\0\F\2\6\4\E\3\9\B\D\7\1\A\4\9\E\0\E\0\B\C\1\C\C ]] 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:22:41.519 11:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 78dafc4a-8026-40d7-a768-9803118b4365 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=78dafc4a802640d7a7689803118b4365 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 78DAFC4A802640D7A7689803118B4365 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 78DAFC4A802640D7A7689803118B4365 == \7\8\D\A\F\C\4\A\8\0\2\6\4\0\D\7\A\7\6\8\9\8\0\3\1\1\8\B\4\3\6\5 ]] 00:22:41.519 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1306645 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1306645 ']' 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1306645 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1306645 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1306645' 00:22:41.778 killing process with pid 1306645 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1306645 00:22:41.778 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1306645 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.037 rmmod nvme_tcp 00:22:42.037 rmmod nvme_fabrics 00:22:42.037 rmmod nvme_keyring 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1306460 ']' 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1306460 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1306460 ']' 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1306460 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1306460 00:22:42.037 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:42.296 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:42.296 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1306460' 00:22:42.296 killing process with pid 1306460 00:22:42.296 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1306460 00:22:42.296 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1306460 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.296 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.832 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.832 00:22:44.832 real 0m12.035s 00:22:44.832 user 
0m10.179s 00:22:44.832 sys 0m4.943s 00:22:44.832 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:44.832 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.832 ************************************ 00:22:44.832 END TEST nvmf_nsid 00:22:44.832 ************************************ 00:22:44.832 11:40:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:44.832 00:22:44.832 real 12m43.154s 00:22:44.832 user 28m5.829s 00:22:44.832 sys 3m37.891s 00:22:44.832 11:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:44.832 11:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.832 ************************************ 00:22:44.832 END TEST nvmf_target_extra 00:22:44.832 ************************************ 00:22:44.832 11:40:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:44.832 11:40:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:44.832 11:40:45 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:44.832 11:40:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.832 ************************************ 00:22:44.832 START TEST nvmf_host 00:22:44.832 ************************************ 00:22:44.832 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:44.832 * Looking for test storage... 00:22:44.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.833 --rc genhtml_branch_coverage=1 00:22:44.833 --rc genhtml_function_coverage=1 00:22:44.833 --rc genhtml_legend=1 00:22:44.833 --rc geninfo_all_blocks=1 00:22:44.833 --rc geninfo_unexecuted_blocks=1 00:22:44.833 00:22:44.833 ' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.833 --rc genhtml_branch_coverage=1 00:22:44.833 --rc genhtml_function_coverage=1 00:22:44.833 --rc genhtml_legend=1 00:22:44.833 --rc geninfo_all_blocks=1 00:22:44.833 --rc geninfo_unexecuted_blocks=1 00:22:44.833 00:22:44.833 ' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.833 --rc genhtml_branch_coverage=1 00:22:44.833 --rc genhtml_function_coverage=1 00:22:44.833 --rc genhtml_legend=1 00:22:44.833 --rc geninfo_all_blocks=1 00:22:44.833 --rc geninfo_unexecuted_blocks=1 00:22:44.833 00:22:44.833 ' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.833 --rc genhtml_branch_coverage=1 00:22:44.833 --rc genhtml_function_coverage=1 00:22:44.833 --rc genhtml_legend=1 00:22:44.833 --rc geninfo_all_blocks=1 00:22:44.833 --rc geninfo_unexecuted_blocks=1 00:22:44.833 00:22:44.833 ' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
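Editor's note: the xtrace above steps through scripts/common.sh's lt/cmp_versions helpers to decide whether the installed lcov (1.15) predates version 2 — both strings are split on '.', '-' and ':' into arrays and compared component by component. The snippet below is a minimal standalone sketch of that comparison; the function name version_lt and the sample call are illustrative and not taken from the SPDK scripts.

# Sketch of the component-wise version comparison traced above (assumed name: version_lt).
version_lt() {
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        (( a < b )) && return 0      # first differing component decides
        (( a > b )) && return 1
    done
    return 1                         # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"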
00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.833 ************************************ 00:22:44.833 START TEST nvmf_multicontroller 00:22:44.833 ************************************ 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:44.833 * Looking for test storage... 
00:22:44.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.833 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:44.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.834 --rc genhtml_branch_coverage=1 00:22:44.834 --rc genhtml_function_coverage=1 00:22:44.834 --rc genhtml_legend=1 00:22:44.834 --rc geninfo_all_blocks=1 00:22:44.834 --rc geninfo_unexecuted_blocks=1 00:22:44.834 00:22:44.834 ' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:44.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.834 --rc genhtml_branch_coverage=1 00:22:44.834 --rc genhtml_function_coverage=1 00:22:44.834 --rc genhtml_legend=1 00:22:44.834 --rc geninfo_all_blocks=1 00:22:44.834 --rc geninfo_unexecuted_blocks=1 00:22:44.834 00:22:44.834 ' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:44.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.834 --rc genhtml_branch_coverage=1 00:22:44.834 --rc genhtml_function_coverage=1 00:22:44.834 --rc genhtml_legend=1 00:22:44.834 --rc geninfo_all_blocks=1 00:22:44.834 --rc geninfo_unexecuted_blocks=1 00:22:44.834 00:22:44.834 ' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:44.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.834 --rc genhtml_branch_coverage=1 00:22:44.834 --rc genhtml_function_coverage=1 00:22:44.834 --rc genhtml_legend=1 00:22:44.834 --rc geninfo_all_blocks=1 00:22:44.834 --rc geninfo_unexecuted_blocks=1 00:22:44.834 00:22:44.834 ' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:44.834 11:40:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.834 11:40:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.834 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.835 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.835 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.835 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.835 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.835 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.835 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.110 
11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:50.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:50.110 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.110 11:40:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.110 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:50.111 Found net devices under 0000:af:00.0: cvl_0_0 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:50.111 Found net devices under 0000:af:00.1: cvl_0_1 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
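Editor's note: the discovery loop above matches each candidate PCI function against the e810/x722/mlx device-ID tables and then reads the kernel net devices sitting under it in sysfs, which is how 0000:af:00.0 and 0000:af:00.1 resolve to cvl_0_0 and cvl_0_1. A rough standalone equivalent of that sysfs lookup is sketched below; the PCI addresses come from the trace, everything else is illustrative.

# Sketch: list the network interfaces bound to each PCI function, as the trace does
# via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] || continue              # skip functions with no bound netdev
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done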
00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.111 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:22:50.370 00:22:50.370 --- 10.0.0.2 ping statistics --- 00:22:50.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.370 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:22:50.370 00:22:50.370 --- 10.0.0.1 ping statistics --- 00:22:50.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.370 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.370 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1311021 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1311021 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1311021 ']' 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.629 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.629 [2024-11-15 11:40:51.294289] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
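Editor's note: nvmf_tcp_init above splits the two ports into a target/initiator pair — cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator side, and the two pings confirm cross-namespace reachability before nvmf_tgt is launched inside the namespace. The commands below reproduce that topology on their own; they mirror the trace one-for-one, so treat them as a sketch for a disposable test host rather than a general-purpose script.

# Target/initiator split performed by nvmf_tcp_init (interface and namespace names from the trace).
ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator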
00:22:50.629 [2024-11-15 11:40:51.294347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.630 [2024-11-15 11:40:51.365632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.630 [2024-11-15 11:40:51.405761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.630 [2024-11-15 11:40:51.405797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.630 [2024-11-15 11:40:51.405803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.630 [2024-11-15 11:40:51.405809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.630 [2024-11-15 11:40:51.405813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.630 [2024-11-15 11:40:51.407251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.630 [2024-11-15 11:40:51.407267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.630 [2024-11-15 11:40:51.407269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 [2024-11-15 11:40:51.561904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 Malloc0 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 [2024-11-15 11:40:51.622033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 [2024-11-15 11:40:51.629956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.889 Malloc1 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:50.889 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1311052 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1311052 /var/tmp/bdevperf.sock 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1311052 ']' 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
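Everything up to this point is target-side plumbing: rpc_cmd here effectively forwards to SPDK's scripts/rpc.py against the running nvmf_tgt, and the calls above build two subsystems (cnode1 and cnode2), each backed by a 64 MiB malloc bdev and each listening on ports 4420 and 4421. A rough sketch of the equivalent direct rpc.py invocations, assuming the default /var/tmp/spdk.sock application socket, would be:

    # TCP transport with the options used by this test (-o, 8192-byte I/O unit size)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks, exported as a namespace of cnode1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners per subsystem so the initiator can attach two paths later
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then launched with -z, so it only opens /var/tmp/bdevperf.sock and waits to be configured over RPC; waitforlisten blocks until that socket is up.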
00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.890 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.149 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:51.149 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:22:51.149 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:51.149 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.149 11:40:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 NVMe0n1 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.409 1 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.409 request: 00:22:51.409 { 00:22:51.409 "name": "NVMe0", 00:22:51.409 "trtype": "tcp", 00:22:51.409 "traddr": "10.0.0.2", 00:22:51.409 "adrfam": "ipv4", 00:22:51.409 "trsvcid": "4420", 00:22:51.409 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:51.409 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:51.409 "hostaddr": "10.0.0.1", 00:22:51.409 "prchk_reftag": false, 00:22:51.409 "prchk_guard": false, 00:22:51.409 "hdgst": false, 00:22:51.409 "ddgst": false, 00:22:51.409 "allow_unrecognized_csi": false, 00:22:51.409 "method": "bdev_nvme_attach_controller", 00:22:51.409 "req_id": 1 00:22:51.409 } 00:22:51.409 Got JSON-RPC error response 00:22:51.409 response: 00:22:51.409 { 00:22:51.409 "code": -114, 00:22:51.409 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:51.409 } 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:51.409 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 request: 00:22:51.410 { 00:22:51.410 "name": "NVMe0", 00:22:51.410 "trtype": "tcp", 00:22:51.410 "traddr": "10.0.0.2", 00:22:51.410 "adrfam": "ipv4", 00:22:51.410 "trsvcid": "4420", 00:22:51.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.410 "hostaddr": "10.0.0.1", 00:22:51.410 "prchk_reftag": false, 00:22:51.410 "prchk_guard": false, 00:22:51.410 "hdgst": false, 00:22:51.410 "ddgst": false, 00:22:51.410 "allow_unrecognized_csi": false, 00:22:51.410 "method": "bdev_nvme_attach_controller", 00:22:51.410 "req_id": 1 00:22:51.410 } 00:22:51.410 Got JSON-RPC error response 00:22:51.410 response: 00:22:51.410 { 00:22:51.410 "code": -114, 00:22:51.410 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:51.410 } 00:22:51.410 11:40:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 request: 00:22:51.410 { 00:22:51.410 "name": "NVMe0", 00:22:51.410 "trtype": "tcp", 00:22:51.410 "traddr": "10.0.0.2", 00:22:51.410 "adrfam": "ipv4", 00:22:51.410 "trsvcid": "4420", 00:22:51.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.410 "hostaddr": "10.0.0.1", 00:22:51.410 "prchk_reftag": false, 00:22:51.410 "prchk_guard": false, 00:22:51.410 "hdgst": false, 00:22:51.410 "ddgst": false, 00:22:51.410 "multipath": "disable", 00:22:51.410 "allow_unrecognized_csi": false, 00:22:51.410 "method": "bdev_nvme_attach_controller", 00:22:51.410 "req_id": 1 00:22:51.410 } 00:22:51.410 Got JSON-RPC error response 00:22:51.410 response: 00:22:51.410 { 00:22:51.410 "code": -114, 00:22:51.410 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:51.410 } 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.410 11:40:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.410 request: 00:22:51.410 { 00:22:51.410 "name": "NVMe0", 00:22:51.410 "trtype": "tcp", 00:22:51.410 "traddr": "10.0.0.2", 00:22:51.410 "adrfam": "ipv4", 00:22:51.410 "trsvcid": "4420", 00:22:51.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.410 "hostaddr": "10.0.0.1", 00:22:51.410 "prchk_reftag": false, 00:22:51.410 "prchk_guard": false, 00:22:51.410 "hdgst": false, 00:22:51.410 "ddgst": false, 00:22:51.410 "multipath": "failover", 00:22:51.410 "allow_unrecognized_csi": false, 00:22:51.410 "method": "bdev_nvme_attach_controller", 00:22:51.410 "req_id": 1 00:22:51.410 } 00:22:51.410 Got JSON-RPC error response 00:22:51.410 response: 00:22:51.410 { 00:22:51.410 "code": -114, 00:22:51.410 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:51.410 } 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.410 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.669 NVMe0n1 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
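The four NOT cases above all try to reuse the controller name NVMe0 on the path that is already attached (10.0.0.2:4420): changing the host NQN, pointing at a different subsystem (cnode2), or passing -x disable each comes back with JSON-RPC error -114, and -x failover is rejected as well because the network path is identical to the existing one. The only accepted variation is the plain attach at host/multicontroller.sh@79, which reaches the same subsystem through the second listener and therefore becomes an additional path under the existing NVMe0 controller. A sketch of that accepted call, assuming the bdevperf RPC socket used throughout this test:

    # second path to NVMe0: same subsystem NQN, same target address, different port
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1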
00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.669 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:51.669 11:40:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.046 { 00:22:53.046 "results": [ 00:22:53.046 { 00:22:53.046 "job": "NVMe0n1", 00:22:53.046 "core_mask": "0x1", 00:22:53.046 "workload": "write", 00:22:53.046 "status": "finished", 00:22:53.046 "queue_depth": 128, 00:22:53.046 "io_size": 4096, 00:22:53.047 "runtime": 1.003354, 00:22:53.047 "iops": 27120.039387893008, 00:22:53.047 "mibps": 105.93765385895706, 00:22:53.047 "io_failed": 0, 00:22:53.047 "io_timeout": 0, 00:22:53.047 "avg_latency_us": 4711.361048773724, 00:22:53.047 "min_latency_us": 2427.8109090909093, 00:22:53.047 "max_latency_us": 9651.665454545455 00:22:53.047 } 00:22:53.047 ], 00:22:53.047 "core_count": 1 00:22:53.047 } 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1311052 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 1311052 ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1311052 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1311052 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1311052' 00:22:53.047 killing process with pid 1311052 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1311052 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1311052 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:53.047 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:53.047 [2024-11-15 11:40:51.734531] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:22:53.047 [2024-11-15 11:40:51.734596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311052 ] 00:22:53.047 [2024-11-15 11:40:51.828791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.047 [2024-11-15 11:40:51.877678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.047 [2024-11-15 11:40:52.365042] bdev.c:4903:bdev_name_add: *ERROR*: Bdev name 43baaf66-7f4d-4b2d-8bba-8f22592583f0 already exists 00:22:53.047 [2024-11-15 11:40:52.365076] bdev.c:8112:bdev_register: *ERROR*: Unable to add uuid:43baaf66-7f4d-4b2d-8bba-8f22592583f0 alias for bdev NVMe1n1 00:22:53.047 [2024-11-15 11:40:52.365087] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:53.047 Running I/O for 1 seconds... 00:22:53.047 27083.00 IOPS, 105.79 MiB/s 00:22:53.047 Latency(us) 00:22:53.047 [2024-11-15T10:40:53.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.047 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:53.047 NVMe0n1 : 1.00 27120.04 105.94 0.00 0.00 4711.36 2427.81 9651.67 00:22:53.047 [2024-11-15T10:40:53.900Z] =================================================================================================================== 00:22:53.047 [2024-11-15T10:40:53.900Z] Total : 27120.04 105.94 0.00 0.00 4711.36 2427.81 9651.67 00:22:53.047 Received shutdown signal, test time was about 1.000000 seconds 00:22:53.047 00:22:53.047 Latency(us) 00:22:53.047 [2024-11-15T10:40:53.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.047 [2024-11-15T10:40:53.900Z] =================================================================================================================== 00:22:53.047 [2024-11-15T10:40:53.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.047 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.047 rmmod nvme_tcp 00:22:53.047 rmmod nvme_fabrics 00:22:53.047 rmmod nvme_keyring 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:53.047 
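The numbers in the try.txt dump above are internally consistent and easy to sanity-check by hand: 27,120 write IOPS of 4 KiB each works out to about 105.9 MiB/s, and with the queue depth of 128 used by bdevperf, Little's law predicts the reported average latency almost exactly:

    105.94 MiB/s ~ 27120 IOPS * 4096 B / 2^20
    4711 us      ~ 128 (queue depth) / 27120 IOPS ~ 4.72 ms

The ERROR lines earlier in the dump are a consequence of the test attaching NVMe1 to the same subsystem that NVMe0 already exposes: the namespace carries the same UUID, so registering a second bdev under the name 43baaf66-7f4d-4b2d-8bba-8f22592583f0 is refused, while the controller itself still attaches and is counted by the subsequent bdev_nvme_get_controllers check.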
11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1311021 ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1311021 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1311021 ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1311021 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:53.047 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1311021 00:22:53.307 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:53.307 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:53.307 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1311021' 00:22:53.307 killing process with pid 1311021 00:22:53.307 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1311021 00:22:53.307 11:40:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1311021 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:53.307 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:53.566 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.566 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:53.566 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.566 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.566 11:40:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.471 00:22:55.471 real 0m10.769s 00:22:55.471 user 0m12.092s 00:22:55.471 sys 0m4.950s 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.471 ************************************ 00:22:55.471 END TEST nvmf_multicontroller 00:22:55.471 ************************************ 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.471 ************************************ 00:22:55.471 START TEST nvmf_aer 00:22:55.471 ************************************ 00:22:55.471 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:55.730 * Looking for test storage... 00:22:55.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:55.730 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:55.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.731 --rc genhtml_branch_coverage=1 00:22:55.731 --rc genhtml_function_coverage=1 00:22:55.731 --rc genhtml_legend=1 00:22:55.731 --rc geninfo_all_blocks=1 00:22:55.731 --rc geninfo_unexecuted_blocks=1 00:22:55.731 00:22:55.731 ' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:55.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.731 --rc genhtml_branch_coverage=1 00:22:55.731 --rc genhtml_function_coverage=1 00:22:55.731 --rc genhtml_legend=1 00:22:55.731 --rc geninfo_all_blocks=1 00:22:55.731 --rc geninfo_unexecuted_blocks=1 00:22:55.731 00:22:55.731 ' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:55.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.731 --rc genhtml_branch_coverage=1 00:22:55.731 --rc genhtml_function_coverage=1 00:22:55.731 --rc genhtml_legend=1 00:22:55.731 --rc geninfo_all_blocks=1 00:22:55.731 --rc geninfo_unexecuted_blocks=1 00:22:55.731 00:22:55.731 ' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:55.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.731 --rc genhtml_branch_coverage=1 00:22:55.731 --rc genhtml_function_coverage=1 00:22:55.731 --rc genhtml_legend=1 00:22:55.731 --rc geninfo_all_blocks=1 00:22:55.731 --rc geninfo_unexecuted_blocks=1 00:22:55.731 00:22:55.731 ' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.731 11:40:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:01.006 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:01.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:01.007 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:01.007 Found net devices under 0000:af:00.0: cvl_0_0 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.007 11:41:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:01.007 Found net devices under 0000:af:00.1: cvl_0_1 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:01.007 
11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:01.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:23:01.007 00:23:01.007 --- 10.0.0.2 ping statistics --- 00:23:01.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.007 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:23:01.007 00:23:01.007 --- 10.0.0.1 ping statistics --- 00:23:01.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.007 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1315037 00:23:01.007 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1315037 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1315037 ']' 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.008 11:41:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.266 [2024-11-15 11:41:01.913918] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
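Before the aer test can reach the target, nvmftestinit splits the two E810 ports found above across network namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator side, and the two pings confirm reachability in both directions; nvmf_tgt then runs inside that namespace with -m 0xF (four reactors). Reduced to the plain ip/iptables commands from this run, the wiring looks roughly like:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT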
00:23:01.267 [2024-11-15 11:41:01.913979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.267 [2024-11-15 11:41:02.013402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.267 [2024-11-15 11:41:02.065714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.267 [2024-11-15 11:41:02.065752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.267 [2024-11-15 11:41:02.065762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.267 [2024-11-15 11:41:02.065771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.267 [2024-11-15 11:41:02.065779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.267 [2024-11-15 11:41:02.067839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.267 [2024-11-15 11:41:02.067939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.267 [2024-11-15 11:41:02.068040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.267 [2024-11-15 11:41:02.068041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 [2024-11-15 11:41:02.216243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 Malloc0 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 [2024-11-15 11:41:02.275352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:01.526 [ 00:23:01.526 { 00:23:01.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:01.526 "subtype": "Discovery", 00:23:01.526 "listen_addresses": [], 00:23:01.526 "allow_any_host": true, 00:23:01.526 "hosts": [] 00:23:01.526 }, 00:23:01.526 { 00:23:01.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.526 "subtype": "NVMe", 00:23:01.526 "listen_addresses": [ 00:23:01.526 { 00:23:01.526 "trtype": "TCP", 00:23:01.526 "adrfam": "IPv4", 00:23:01.526 "traddr": "10.0.0.2", 00:23:01.526 "trsvcid": "4420" 00:23:01.526 } 00:23:01.526 ], 00:23:01.526 "allow_any_host": true, 00:23:01.526 "hosts": [], 00:23:01.526 "serial_number": "SPDK00000000000001", 00:23:01.526 "model_number": "SPDK bdev Controller", 00:23:01.526 "max_namespaces": 2, 00:23:01.526 "min_cntlid": 1, 00:23:01.526 "max_cntlid": 65519, 00:23:01.526 "namespaces": [ 00:23:01.526 { 00:23:01.526 "nsid": 1, 00:23:01.526 "bdev_name": "Malloc0", 00:23:01.526 "name": "Malloc0", 00:23:01.526 "nguid": "383A25E983494D089753D31FF7BDEA62", 00:23:01.526 "uuid": "383a25e9-8349-4d08-9753-d31ff7bdea62" 00:23:01.526 } 00:23:01.526 ] 00:23:01.526 } 00:23:01.526 ] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1315202 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:01.526 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.785 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.044 Malloc1 00:23:02.044 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.044 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.045 [ 00:23:02.045 { 00:23:02.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:02.045 "subtype": "Discovery", 00:23:02.045 "listen_addresses": [], 00:23:02.045 "allow_any_host": true, 00:23:02.045 "hosts": [] 00:23:02.045 }, 00:23:02.045 { 00:23:02.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.045 "subtype": "NVMe", 00:23:02.045 "listen_addresses": [ 00:23:02.045 { 00:23:02.045 "trtype": "TCP", 00:23:02.045 "adrfam": "IPv4", 00:23:02.045 "traddr": "10.0.0.2", 00:23:02.045 "trsvcid": "4420" 00:23:02.045 } 00:23:02.045 ], 00:23:02.045 "allow_any_host": true, 00:23:02.045 "hosts": [], 00:23:02.045 "serial_number": "SPDK00000000000001", 00:23:02.045 "model_number": "SPDK bdev Controller", 00:23:02.045 "max_namespaces": 2, 00:23:02.045 "min_cntlid": 1, 00:23:02.045 "max_cntlid": 65519, 00:23:02.045 "namespaces": [ 00:23:02.045 
{ 00:23:02.045 "nsid": 1, 00:23:02.045 "bdev_name": "Malloc0", 00:23:02.045 "name": "Malloc0", 00:23:02.045 Asynchronous Event Request test 00:23:02.045 Attaching to 10.0.0.2 00:23:02.045 Attached to 10.0.0.2 00:23:02.045 Registering asynchronous event callbacks... 00:23:02.045 Starting namespace attribute notice tests for all controllers... 00:23:02.045 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:02.045 aer_cb - Changed Namespace 00:23:02.045 Cleaning up... 00:23:02.045 "nguid": "383A25E983494D089753D31FF7BDEA62", 00:23:02.045 "uuid": "383a25e9-8349-4d08-9753-d31ff7bdea62" 00:23:02.045 }, 00:23:02.045 { 00:23:02.045 "nsid": 2, 00:23:02.045 "bdev_name": "Malloc1", 00:23:02.045 "name": "Malloc1", 00:23:02.045 "nguid": "5AB40EFC4DC940988965A64C67FE7AE3", 00:23:02.045 "uuid": "5ab40efc-4dc9-4098-8965-a64c67fe7ae3" 00:23:02.045 } 00:23:02.045 ] 00:23:02.045 } 00:23:02.045 ] 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1315202 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.045 rmmod nvme_tcp 00:23:02.045 rmmod nvme_fabrics 00:23:02.045 rmmod nvme_keyring 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1315037 ']' 
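In outline, the nvmf_aer case above builds a one-namespace subsystem, starts the aer example app (which attaches over TCP, registers AEN callbacks and signals readiness through a touch file), then hot-adds a second namespace; the app's "aer_cb - Changed Namespace" messages land in the middle of the nvmf_get_subsystems JSON above simply because both streams share the job's console. A rough standalone equivalent of the RPC sequence is sketched below; rpc_cmd in the trace is the autotest wrapper that forwards to SPDK's scripts/rpc.py, and the ip-netns prefix used by the harness is omitted here for brevity:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # aer app attaches, registers for AENs, then creates the touch file once it is ready
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # hot-adding a second namespace produces the Namespace Attribute Changed AEN seen in the log
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2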
00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1315037 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1315037 ']' 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1315037 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1315037 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1315037' 00:23:02.045 killing process with pid 1315037 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1315037 00:23:02.045 11:41:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1315037 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.304 11:41:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.839 00:23:04.839 real 0m8.830s 00:23:04.839 user 0m5.439s 00:23:04.839 sys 0m4.417s 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:04.839 ************************************ 00:23:04.839 END TEST nvmf_aer 00:23:04.839 ************************************ 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.839 ************************************ 00:23:04.839 START TEST nvmf_async_init 00:23:04.839 
************************************ 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:04.839 * Looking for test storage... 00:23:04.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:04.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.839 --rc genhtml_branch_coverage=1 00:23:04.839 --rc genhtml_function_coverage=1 00:23:04.839 --rc genhtml_legend=1 00:23:04.839 --rc geninfo_all_blocks=1 00:23:04.839 --rc geninfo_unexecuted_blocks=1 00:23:04.839 00:23:04.839 ' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:04.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.839 --rc genhtml_branch_coverage=1 00:23:04.839 --rc genhtml_function_coverage=1 00:23:04.839 --rc genhtml_legend=1 00:23:04.839 --rc geninfo_all_blocks=1 00:23:04.839 --rc geninfo_unexecuted_blocks=1 00:23:04.839 00:23:04.839 ' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:04.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.839 --rc genhtml_branch_coverage=1 00:23:04.839 --rc genhtml_function_coverage=1 00:23:04.839 --rc genhtml_legend=1 00:23:04.839 --rc geninfo_all_blocks=1 00:23:04.839 --rc geninfo_unexecuted_blocks=1 00:23:04.839 00:23:04.839 ' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:04.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.839 --rc genhtml_branch_coverage=1 00:23:04.839 --rc genhtml_function_coverage=1 00:23:04.839 --rc genhtml_legend=1 00:23:04.839 --rc geninfo_all_blocks=1 00:23:04.839 --rc geninfo_unexecuted_blocks=1 00:23:04.839 00:23:04.839 ' 00:23:04.839 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.840 11:41:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:04.840 11:41:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0bfeec01f9e344ba98221f03736f06fe 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.840 11:41:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.411 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:11.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:11.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:11.412 Found net devices under 0000:af:00.0: cvl_0_0 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:11.412 Found net devices under 0000:af:00.1: cvl_0_1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.412 11:41:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:11.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:23:11.412 00:23:11.412 --- 10.0.0.2 ping statistics --- 00:23:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.412 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:23:11.412 00:23:11.412 --- 10.0.0.1 ping statistics --- 00:23:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.412 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1318989 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1318989 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1318989 ']' 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:11.412 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.412 [2024-11-15 11:41:11.429079] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
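For this test the target is started with core mask 0x1 (a single reactor) instead of the 0xF used in the nvmf_aer run, again from inside the namespace and with every tracepoint group enabled. A condensed sketch of the launch as traced above (the relative build/bin path is a simplification of the full workspace path in the log):

    # -i 0 selects shared-memory id 0, -e 0xFFFF enables all tracepoint groups, -m 0x1 pins one core
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # tracepoints can be snapshotted later with: spdk_trace -s nvmf -i 0   (per the notices below)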
00:23:11.412 [2024-11-15 11:41:11.429139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.412 [2024-11-15 11:41:11.528859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.412 [2024-11-15 11:41:11.575768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.413 [2024-11-15 11:41:11.575810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.413 [2024-11-15 11:41:11.575822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.413 [2024-11-15 11:41:11.575830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.413 [2024-11-15 11:41:11.575838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.413 [2024-11-15 11:41:11.576548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 [2024-11-15 11:41:11.715197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 null0 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bfeec01f9e344ba98221f03736f06fe 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 [2024-11-15 11:41:11.767530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 nvme0n1 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:11.413 11:41:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 [ 00:23:11.413 { 00:23:11.413 "name": "nvme0n1", 00:23:11.413 "aliases": [ 00:23:11.413 "0bfeec01-f9e3-44ba-9822-1f03736f06fe" 00:23:11.413 ], 00:23:11.413 "product_name": "NVMe disk", 00:23:11.413 "block_size": 512, 00:23:11.413 "num_blocks": 2097152, 00:23:11.413 "uuid": "0bfeec01-f9e3-44ba-9822-1f03736f06fe", 00:23:11.413 "numa_id": 1, 00:23:11.413 "assigned_rate_limits": { 00:23:11.413 "rw_ios_per_sec": 0, 00:23:11.413 "rw_mbytes_per_sec": 0, 00:23:11.413 "r_mbytes_per_sec": 0, 00:23:11.413 "w_mbytes_per_sec": 0 00:23:11.413 }, 00:23:11.413 "claimed": false, 00:23:11.413 "zoned": false, 00:23:11.413 "supported_io_types": { 00:23:11.413 "read": true, 00:23:11.413 "write": true, 00:23:11.413 "unmap": false, 00:23:11.413 "flush": true, 00:23:11.413 "reset": true, 00:23:11.413 "nvme_admin": true, 00:23:11.413 "nvme_io": true, 00:23:11.413 "nvme_io_md": false, 00:23:11.413 "write_zeroes": true, 00:23:11.413 "zcopy": false, 00:23:11.413 "get_zone_info": false, 00:23:11.413 "zone_management": false, 00:23:11.413 "zone_append": false, 00:23:11.413 "compare": true, 00:23:11.413 "compare_and_write": true, 00:23:11.413 "abort": true, 00:23:11.413 "seek_hole": false, 00:23:11.413 "seek_data": false, 00:23:11.413 "copy": true, 00:23:11.413 "nvme_iov_md": false 00:23:11.413 }, 00:23:11.413 
"memory_domains": [ 00:23:11.413 { 00:23:11.413 "dma_device_id": "system", 00:23:11.413 "dma_device_type": 1 00:23:11.413 } 00:23:11.413 ], 00:23:11.413 "driver_specific": { 00:23:11.413 "nvme": [ 00:23:11.413 { 00:23:11.413 "trid": { 00:23:11.413 "trtype": "TCP", 00:23:11.413 "adrfam": "IPv4", 00:23:11.413 "traddr": "10.0.0.2", 00:23:11.413 "trsvcid": "4420", 00:23:11.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:11.413 }, 00:23:11.413 "ctrlr_data": { 00:23:11.413 "cntlid": 1, 00:23:11.413 "vendor_id": "0x8086", 00:23:11.413 "model_number": "SPDK bdev Controller", 00:23:11.413 "serial_number": "00000000000000000000", 00:23:11.413 "firmware_revision": "25.01", 00:23:11.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.413 "oacs": { 00:23:11.413 "security": 0, 00:23:11.413 "format": 0, 00:23:11.413 "firmware": 0, 00:23:11.413 "ns_manage": 0 00:23:11.413 }, 00:23:11.413 "multi_ctrlr": true, 00:23:11.413 "ana_reporting": false 00:23:11.413 }, 00:23:11.413 "vs": { 00:23:11.413 "nvme_version": "1.3" 00:23:11.413 }, 00:23:11.413 "ns_data": { 00:23:11.413 "id": 1, 00:23:11.413 "can_share": true 00:23:11.413 } 00:23:11.413 } 00:23:11.413 ], 00:23:11.413 "mp_policy": "active_passive" 00:23:11.413 } 00:23:11.413 } 00:23:11.413 ] 00:23:11.413 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.413 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:11.413 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.413 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 [2024-11-15 11:41:12.024008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:11.414 [2024-11-15 11:41:12.024083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034730 (9): Bad file descriptor 00:23:11.414 [2024-11-15 11:41:12.156575] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 [ 00:23:11.414 { 00:23:11.414 "name": "nvme0n1", 00:23:11.414 "aliases": [ 00:23:11.414 "0bfeec01-f9e3-44ba-9822-1f03736f06fe" 00:23:11.414 ], 00:23:11.414 "product_name": "NVMe disk", 00:23:11.414 "block_size": 512, 00:23:11.414 "num_blocks": 2097152, 00:23:11.414 "uuid": "0bfeec01-f9e3-44ba-9822-1f03736f06fe", 00:23:11.414 "numa_id": 1, 00:23:11.414 "assigned_rate_limits": { 00:23:11.414 "rw_ios_per_sec": 0, 00:23:11.414 "rw_mbytes_per_sec": 0, 00:23:11.414 "r_mbytes_per_sec": 0, 00:23:11.414 "w_mbytes_per_sec": 0 00:23:11.414 }, 00:23:11.414 "claimed": false, 00:23:11.414 "zoned": false, 00:23:11.414 "supported_io_types": { 00:23:11.414 "read": true, 00:23:11.414 "write": true, 00:23:11.414 "unmap": false, 00:23:11.414 "flush": true, 00:23:11.414 "reset": true, 00:23:11.414 "nvme_admin": true, 00:23:11.414 "nvme_io": true, 00:23:11.414 "nvme_io_md": false, 00:23:11.414 "write_zeroes": true, 00:23:11.414 "zcopy": false, 00:23:11.414 "get_zone_info": false, 00:23:11.414 "zone_management": false, 00:23:11.414 "zone_append": false, 00:23:11.414 "compare": true, 00:23:11.414 "compare_and_write": true, 00:23:11.414 "abort": true, 00:23:11.414 "seek_hole": false, 00:23:11.414 "seek_data": false, 00:23:11.414 "copy": true, 00:23:11.414 "nvme_iov_md": false 00:23:11.414 }, 00:23:11.414 "memory_domains": [ 00:23:11.414 { 00:23:11.414 "dma_device_id": "system", 00:23:11.414 "dma_device_type": 1 00:23:11.414 } 00:23:11.414 ], 00:23:11.414 "driver_specific": { 00:23:11.414 "nvme": [ 00:23:11.414 { 00:23:11.414 "trid": { 00:23:11.414 "trtype": "TCP", 00:23:11.414 "adrfam": "IPv4", 00:23:11.414 "traddr": "10.0.0.2", 00:23:11.414 "trsvcid": "4420", 00:23:11.414 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:11.414 }, 00:23:11.414 "ctrlr_data": { 00:23:11.414 "cntlid": 2, 00:23:11.414 "vendor_id": "0x8086", 00:23:11.414 "model_number": "SPDK bdev Controller", 00:23:11.414 "serial_number": "00000000000000000000", 00:23:11.414 "firmware_revision": "25.01", 00:23:11.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.414 "oacs": { 00:23:11.414 "security": 0, 00:23:11.414 "format": 0, 00:23:11.414 "firmware": 0, 00:23:11.414 "ns_manage": 0 00:23:11.414 }, 00:23:11.414 "multi_ctrlr": true, 00:23:11.414 "ana_reporting": false 00:23:11.414 }, 00:23:11.414 "vs": { 00:23:11.414 "nvme_version": "1.3" 00:23:11.414 }, 00:23:11.414 "ns_data": { 00:23:11.414 "id": 1, 00:23:11.414 "can_share": true 00:23:11.414 } 00:23:11.414 } 00:23:11.414 ], 00:23:11.414 "mp_policy": "active_passive" 00:23:11.414 } 00:23:11.414 } 00:23:11.414 ] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
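Up to this point the async_init case exercised a plain (non-TLS) attach: a 1024-block, 512-byte null bdev is exported under nqn.2016-06.io.spdk:cnode0 with a fixed NGUID, attached from the host side, reset and detached. The two bdev_get_bdevs dumps bracket the reset, with cntlid going from 1 to 2 after bdev_nvme_reset_controller reconnects, i.e. the second dump reflects a newly created controller on the same subsystem. Condensed to the RPC calls seen in the trace (again via scripts/rpc.py, ip-netns prefix omitted):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0bfeec01f9e344ba98221f03736f06fe
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the test repeats the attach over a TLS-protected listener, shown next.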
00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rI7hPRSInq 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rI7hPRSInq 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.rI7hPRSInq 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 [2024-11-15 11:41:12.236696] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.414 [2024-11-15 11:41:12.236828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.414 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 [2024-11-15 11:41:12.256764] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.673 nvme0n1 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.674 [ 00:23:11.674 { 00:23:11.674 "name": "nvme0n1", 00:23:11.674 "aliases": [ 00:23:11.674 "0bfeec01-f9e3-44ba-9822-1f03736f06fe" 00:23:11.674 ], 00:23:11.674 "product_name": "NVMe disk", 00:23:11.674 "block_size": 512, 00:23:11.674 "num_blocks": 2097152, 00:23:11.674 "uuid": "0bfeec01-f9e3-44ba-9822-1f03736f06fe", 00:23:11.674 "numa_id": 1, 00:23:11.674 "assigned_rate_limits": { 00:23:11.674 "rw_ios_per_sec": 0, 00:23:11.674 "rw_mbytes_per_sec": 0, 00:23:11.674 "r_mbytes_per_sec": 0, 00:23:11.674 "w_mbytes_per_sec": 0 00:23:11.674 }, 00:23:11.674 "claimed": false, 00:23:11.674 "zoned": false, 00:23:11.674 "supported_io_types": { 00:23:11.674 "read": true, 00:23:11.674 "write": true, 00:23:11.674 "unmap": false, 00:23:11.674 "flush": true, 00:23:11.674 "reset": true, 00:23:11.674 "nvme_admin": true, 00:23:11.674 "nvme_io": true, 00:23:11.674 "nvme_io_md": false, 00:23:11.674 "write_zeroes": true, 00:23:11.674 "zcopy": false, 00:23:11.674 "get_zone_info": false, 00:23:11.674 "zone_management": false, 00:23:11.674 "zone_append": false, 00:23:11.674 "compare": true, 00:23:11.674 "compare_and_write": true, 00:23:11.674 "abort": true, 00:23:11.674 "seek_hole": false, 00:23:11.674 "seek_data": false, 00:23:11.674 "copy": true, 00:23:11.674 "nvme_iov_md": false 00:23:11.674 }, 00:23:11.674 "memory_domains": [ 00:23:11.674 { 00:23:11.674 "dma_device_id": "system", 00:23:11.674 "dma_device_type": 1 00:23:11.674 } 00:23:11.674 ], 00:23:11.674 "driver_specific": { 00:23:11.674 "nvme": [ 00:23:11.674 { 00:23:11.674 "trid": { 00:23:11.674 "trtype": "TCP", 00:23:11.674 "adrfam": "IPv4", 00:23:11.674 "traddr": "10.0.0.2", 00:23:11.674 "trsvcid": "4421", 00:23:11.674 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:11.674 }, 00:23:11.674 "ctrlr_data": { 00:23:11.674 "cntlid": 3, 00:23:11.674 "vendor_id": "0x8086", 00:23:11.674 "model_number": "SPDK bdev Controller", 00:23:11.674 "serial_number": "00000000000000000000", 00:23:11.674 "firmware_revision": "25.01", 00:23:11.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.674 "oacs": { 00:23:11.674 "security": 0, 00:23:11.674 "format": 0, 00:23:11.674 "firmware": 0, 00:23:11.674 "ns_manage": 0 00:23:11.674 }, 00:23:11.674 "multi_ctrlr": true, 00:23:11.674 "ana_reporting": false 00:23:11.674 }, 00:23:11.674 "vs": { 00:23:11.674 "nvme_version": "1.3" 00:23:11.674 }, 00:23:11.674 "ns_data": { 00:23:11.674 "id": 1, 00:23:11.674 "can_share": true 00:23:11.674 } 00:23:11.674 } 00:23:11.674 ], 00:23:11.674 "mp_policy": "active_passive" 00:23:11.674 } 00:23:11.674 } 00:23:11.674 ] 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.rI7hPRSInq 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
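For reference, the TLS/PSK portion of the async_init run above reduces to the following RPC sequence. This is a minimal sketch reconstructed from the xtrace lines in this log (the key material, temp-file path, addresses and NQNs are simply the values this run printed), not a verbatim replay of host/async_init.sh:

  # create a PSK file and register it with the target keyring (values taken from the log above)
  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc_cmd keyring_file_add_key key0 "$key_path"
  # restrict the subsystem and expose a TLS-only listener on the second port
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # attach from the host side with the same PSK, verify the resulting bdev, then tear down
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rpc_cmd bdev_get_bdevs -b nvme0n1
  rpc_cmd bdev_nvme_detach_controller nvme0
  rm -f "$key_path"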
00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.674 rmmod nvme_tcp 00:23:11.674 rmmod nvme_fabrics 00:23:11.674 rmmod nvme_keyring 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1318989 ']' 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1318989 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1318989 ']' 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1318989 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1318989 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1318989' 00:23:11.674 killing process with pid 1318989 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1318989 00:23:11.674 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1318989 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.933 11:41:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.472 00:23:14.472 real 0m9.506s 00:23:14.472 user 0m3.175s 00:23:14.472 sys 0m4.818s 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.472 ************************************ 00:23:14.472 END TEST nvmf_async_init 00:23:14.472 ************************************ 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.472 ************************************ 00:23:14.472 START TEST dma 00:23:14.472 ************************************ 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:14.472 * Looking for test storage... 00:23:14.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.472 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:14.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.473 --rc genhtml_branch_coverage=1 00:23:14.473 --rc genhtml_function_coverage=1 00:23:14.473 --rc genhtml_legend=1 00:23:14.473 --rc geninfo_all_blocks=1 00:23:14.473 --rc geninfo_unexecuted_blocks=1 00:23:14.473 00:23:14.473 ' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:14.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.473 --rc genhtml_branch_coverage=1 00:23:14.473 --rc genhtml_function_coverage=1 00:23:14.473 --rc genhtml_legend=1 00:23:14.473 --rc geninfo_all_blocks=1 00:23:14.473 --rc geninfo_unexecuted_blocks=1 00:23:14.473 00:23:14.473 ' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:14.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.473 --rc genhtml_branch_coverage=1 00:23:14.473 --rc genhtml_function_coverage=1 00:23:14.473 --rc genhtml_legend=1 00:23:14.473 --rc geninfo_all_blocks=1 00:23:14.473 --rc geninfo_unexecuted_blocks=1 00:23:14.473 00:23:14.473 ' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:14.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.473 --rc genhtml_branch_coverage=1 00:23:14.473 --rc genhtml_function_coverage=1 00:23:14.473 --rc genhtml_legend=1 00:23:14.473 --rc geninfo_all_blocks=1 00:23:14.473 --rc geninfo_unexecuted_blocks=1 00:23:14.473 00:23:14.473 ' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.473 
11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:14.473 00:23:14.473 real 0m0.158s 00:23:14.473 user 0m0.082s 00:23:14.473 sys 0m0.089s 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:14.473 ************************************ 00:23:14.473 END TEST dma 00:23:14.473 ************************************ 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:14.473 11:41:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.473 ************************************ 00:23:14.473 START TEST nvmf_identify 00:23:14.473 
************************************ 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:14.473 * Looking for test storage... 00:23:14.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.473 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.474 --rc genhtml_branch_coverage=1 00:23:14.474 --rc genhtml_function_coverage=1 00:23:14.474 --rc genhtml_legend=1 00:23:14.474 --rc geninfo_all_blocks=1 00:23:14.474 --rc geninfo_unexecuted_blocks=1 00:23:14.474 00:23:14.474 ' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.474 --rc genhtml_branch_coverage=1 00:23:14.474 --rc genhtml_function_coverage=1 00:23:14.474 --rc genhtml_legend=1 00:23:14.474 --rc geninfo_all_blocks=1 00:23:14.474 --rc geninfo_unexecuted_blocks=1 00:23:14.474 00:23:14.474 ' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.474 --rc genhtml_branch_coverage=1 00:23:14.474 --rc genhtml_function_coverage=1 00:23:14.474 --rc genhtml_legend=1 00:23:14.474 --rc geninfo_all_blocks=1 00:23:14.474 --rc geninfo_unexecuted_blocks=1 00:23:14.474 00:23:14.474 ' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.474 --rc genhtml_branch_coverage=1 00:23:14.474 --rc genhtml_function_coverage=1 00:23:14.474 --rc genhtml_legend=1 00:23:14.474 --rc geninfo_all_blocks=1 00:23:14.474 --rc geninfo_unexecuted_blocks=1 00:23:14.474 00:23:14.474 ' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.474 11:41:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:19.751 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:19.751 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.751 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:19.752 Found net devices under 0000:af:00.0: cvl_0_0 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:19.752 Found net devices under 0000:af:00.1: cvl_0_1 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.752 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:20.011 00:23:20.011 --- 10.0.0.2 ping statistics --- 00:23:20.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.011 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:23:20.011 00:23:20.011 --- 10.0.0.1 ping statistics --- 00:23:20.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.011 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1322854 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1322854 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1322854 ']' 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.011 11:41:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.270 [2024-11-15 11:41:20.891298] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:23:20.271 [2024-11-15 11:41:20.891354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.271 [2024-11-15 11:41:20.993370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.271 [2024-11-15 11:41:21.044623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.271 [2024-11-15 11:41:21.044669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.271 [2024-11-15 11:41:21.044679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.271 [2024-11-15 11:41:21.044688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.271 [2024-11-15 11:41:21.044696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.271 [2024-11-15 11:41:21.046604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.271 [2024-11-15 11:41:21.046705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.271 [2024-11-15 11:41:21.046782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.271 [2024-11-15 11:41:21.046786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 [2024-11-15 11:41:21.158675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 Malloc0 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 [2024-11-15 11:41:21.260987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.530 [ 00:23:20.530 { 00:23:20.530 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:20.530 "subtype": "Discovery", 00:23:20.530 "listen_addresses": [ 00:23:20.530 { 00:23:20.530 "trtype": "TCP", 00:23:20.530 "adrfam": "IPv4", 00:23:20.530 "traddr": "10.0.0.2", 00:23:20.530 "trsvcid": "4420" 00:23:20.530 } 00:23:20.530 ], 00:23:20.530 "allow_any_host": true, 00:23:20.530 "hosts": [] 00:23:20.530 }, 00:23:20.530 { 00:23:20.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.530 "subtype": "NVMe", 00:23:20.530 "listen_addresses": [ 00:23:20.530 { 00:23:20.530 "trtype": "TCP", 00:23:20.530 "adrfam": "IPv4", 00:23:20.530 "traddr": "10.0.0.2", 00:23:20.530 "trsvcid": "4420" 00:23:20.530 } 00:23:20.530 ], 00:23:20.530 "allow_any_host": true, 00:23:20.530 "hosts": [], 00:23:20.530 "serial_number": "SPDK00000000000001", 00:23:20.530 "model_number": "SPDK bdev Controller", 00:23:20.530 "max_namespaces": 32, 00:23:20.530 "min_cntlid": 1, 00:23:20.530 "max_cntlid": 65519, 00:23:20.530 "namespaces": [ 00:23:20.530 { 00:23:20.530 "nsid": 1, 00:23:20.530 "bdev_name": "Malloc0", 00:23:20.530 "name": "Malloc0", 00:23:20.530 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:20.530 "eui64": "ABCDEF0123456789", 00:23:20.530 "uuid": "b61f0533-f048-40d6-a24b-97b4df710482" 00:23:20.530 } 00:23:20.530 ] 00:23:20.530 } 00:23:20.530 ] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.530 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:20.530 [2024-11-15 11:41:21.314721] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:23:20.531 [2024-11-15 11:41:21.314770] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322876 ] 00:23:20.531 [2024-11-15 11:41:21.372100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:20.531 [2024-11-15 11:41:21.372158] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:20.531 [2024-11-15 11:41:21.372165] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:20.531 [2024-11-15 11:41:21.372181] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:20.531 [2024-11-15 11:41:21.372195] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:20.531 [2024-11-15 11:41:21.375832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:20.531 [2024-11-15 11:41:21.375878] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc93550 0 00:23:20.531 [2024-11-15 11:41:21.376102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:20.531 [2024-11-15 11:41:21.376114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:20.531 [2024-11-15 11:41:21.376120] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:20.531 [2024-11-15 11:41:21.376125] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:20.531 [2024-11-15 11:41:21.376162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.531 [2024-11-15 11:41:21.376170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.531 [2024-11-15 11:41:21.376175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.531 [2024-11-15 11:41:21.376191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:20.531 [2024-11-15 11:41:21.376209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.383473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.383487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.383492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.383510] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:20.794 [2024-11-15 11:41:21.383519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:20.794 [2024-11-15 11:41:21.383525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:20.794 [2024-11-15 11:41:21.383542] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.383563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.383580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.383754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.383763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.383768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.383781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:20.794 [2024-11-15 11:41:21.383790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:20.794 [2024-11-15 11:41:21.383799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.383818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.383833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.383908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.383917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.383922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.383937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:20.794 [2024-11-15 11:41:21.383947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:20.794 [2024-11-15 11:41:21.383956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.383966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.383975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.383989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 
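For readers skimming the DEBUG trace: the state names above and below (read vs, read cap, check en, disable and wait for CSTS.RDY = 0, enable controller by writing CC.EN = 1, wait for CSTS.RDY = 1) are the standard NVMe controller enable handshake, carried here over Fabrics Property Get/Set commands instead of PCIe register accesses. The following is a minimal, hypothetical sketch of that handshake; prop_get/prop_set are stand-in callables for the FABRIC PROPERTY GET/SET commands logged here, the register offsets come from the NVMe specification, and a real init sequence (as the trace itself shows) also programs further CC fields and issues Identify/AER/Keep Alive commands before reaching the ready state.

import time

def enable_controller(prop_get, prop_set):
    """Sketch of the CC.EN / CSTS.RDY handshake traced in this log.

    prop_get(offset) -> int and prop_set(offset, value) are hypothetical
    stand-ins for the Fabrics Property Get/Set commands; offsets follow the
    NVMe controller register map (CAP=0x00, VS=0x08, CC=0x14, CSTS=0x1C).
    """
    CAP, VS, CC, CSTS = 0x00, 0x08, 0x14, 0x1C
    vs = prop_get(VS)                      # "setting state to read vs"
    cap = prop_get(CAP)                    # "setting state to read cap"
    cc = prop_get(CC)                      # "setting state to check en"
    if cc & 0x1:                           # controller already enabled:
        prop_set(CC, cc & ~0x1)            # disable and wait for CSTS.RDY = 0
        while (prop_get(CSTS) & 0x1) != 0:
            time.sleep(0.01)
    prop_set(CC, cc | 0x1)                 # "Setting CC.EN = 1" (simplified)
    while (prop_get(CSTS) & 0x1) == 0:     # wait for CSTS.RDY = 1
        time.sleep(0.01)                   # ... controller is ready
    return vs, cap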
00:23:20.794 [2024-11-15 11:41:21.384071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.384079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.384084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.384095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:20.794 [2024-11-15 11:41:21.384108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.384127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.384141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.384210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.384219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.384224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.384235] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:20.794 [2024-11-15 11:41:21.384242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:20.794 [2024-11-15 11:41:21.384251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:20.794 [2024-11-15 11:41:21.384362] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:20.794 [2024-11-15 11:41:21.384368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:20.794 [2024-11-15 11:41:21.384379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.384398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.384412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.384482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.384494] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.384499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.384510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:20.794 [2024-11-15 11:41:21.384522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.384541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.384555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.384637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.794 [2024-11-15 11:41:21.384645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.794 [2024-11-15 11:41:21.384650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.794 [2024-11-15 11:41:21.384661] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:20.794 [2024-11-15 11:41:21.384667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:20.794 [2024-11-15 11:41:21.384677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:20.794 [2024-11-15 11:41:21.384690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:20.794 [2024-11-15 11:41:21.384701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.794 [2024-11-15 11:41:21.384707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.794 [2024-11-15 11:41:21.384715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.794 [2024-11-15 11:41:21.384729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.794 [2024-11-15 11:41:21.384822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.794 [2024-11-15 11:41:21.384831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.794 [2024-11-15 11:41:21.384836] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.384841] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc93550): datao=0, datal=4096, cccid=0 00:23:20.795 [2024-11-15 11:41:21.384847] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xcf5100) on tqpair(0xc93550): expected_datao=0, payload_size=4096 00:23:20.795 [2024-11-15 11:41:21.384853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.384863] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.384868] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.384903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.795 [2024-11-15 11:41:21.384911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.795 [2024-11-15 11:41:21.384916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.384921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.795 [2024-11-15 11:41:21.384930] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:20.795 [2024-11-15 11:41:21.384941] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:20.795 [2024-11-15 11:41:21.384947] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:20.795 [2024-11-15 11:41:21.384957] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:20.795 [2024-11-15 11:41:21.384964] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:20.795 [2024-11-15 11:41:21.384970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:20.795 [2024-11-15 11:41:21.384984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:20.795 [2024-11-15 11:41:21.384993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.384999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.795 [2024-11-15 11:41:21.385027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.795 [2024-11-15 11:41:21.385100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.795 [2024-11-15 11:41:21.385108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.795 [2024-11-15 11:41:21.385113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.795 [2024-11-15 11:41:21.385128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 
11:41:21.385146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.795 [2024-11-15 11:41:21.385154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.795 [2024-11-15 11:41:21.385180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.795 [2024-11-15 11:41:21.385206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.795 [2024-11-15 11:41:21.385230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:20.795 [2024-11-15 11:41:21.385245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:20.795 [2024-11-15 11:41:21.385253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.795 [2024-11-15 11:41:21.385283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5100, cid 0, qid 0 00:23:20.795 [2024-11-15 11:41:21.385290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5280, cid 1, qid 0 00:23:20.795 [2024-11-15 11:41:21.385296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5400, cid 2, qid 0 00:23:20.795 [2024-11-15 11:41:21.385302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.795 [2024-11-15 11:41:21.385309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5700, cid 4, qid 0 00:23:20.795 [2024-11-15 11:41:21.385443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.795 [2024-11-15 11:41:21.385452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.795 [2024-11-15 11:41:21.385456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.795 
[2024-11-15 11:41:21.385468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5700) on tqpair=0xc93550 00:23:20.795 [2024-11-15 11:41:21.385478] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:20.795 [2024-11-15 11:41:21.385485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:20.795 [2024-11-15 11:41:21.385499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.795 [2024-11-15 11:41:21.385528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5700, cid 4, qid 0 00:23:20.795 [2024-11-15 11:41:21.385604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.795 [2024-11-15 11:41:21.385612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.795 [2024-11-15 11:41:21.385617] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385622] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc93550): datao=0, datal=4096, cccid=4 00:23:20.795 [2024-11-15 11:41:21.385628] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5700) on tqpair(0xc93550): expected_datao=0, payload_size=4096 00:23:20.795 [2024-11-15 11:41:21.385634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385650] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385656] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.795 [2024-11-15 11:41:21.385740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.795 [2024-11-15 11:41:21.385745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5700) on tqpair=0xc93550 00:23:20.795 [2024-11-15 11:41:21.385764] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:20.795 [2024-11-15 11:41:21.385789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.795 [2024-11-15 11:41:21.385815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.795 [2024-11-15 11:41:21.385825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc93550) 00:23:20.795 [2024-11-15 11:41:21.385832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.795 [2024-11-15 11:41:21.385851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5700, cid 4, qid 0 00:23:20.795 [2024-11-15 11:41:21.385858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5880, cid 5, qid 0 00:23:20.795 [2024-11-15 11:41:21.385960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.795 [2024-11-15 11:41:21.385969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.795 [2024-11-15 11:41:21.385974] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.385978] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc93550): datao=0, datal=1024, cccid=4 00:23:20.796 [2024-11-15 11:41:21.385984] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5700) on tqpair(0xc93550): expected_datao=0, payload_size=1024 00:23:20.796 [2024-11-15 11:41:21.385990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.385999] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.386004] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.386011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.796 [2024-11-15 11:41:21.386019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.796 [2024-11-15 11:41:21.386023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.386028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5880) on tqpair=0xc93550 00:23:20.796 [2024-11-15 11:41:21.426660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.796 [2024-11-15 11:41:21.426677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.796 [2024-11-15 11:41:21.426682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5700) on tqpair=0xc93550 00:23:20.796 [2024-11-15 11:41:21.426704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc93550) 00:23:20.796 [2024-11-15 11:41:21.426719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.796 [2024-11-15 11:41:21.426742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5700, cid 4, qid 0 00:23:20.796 [2024-11-15 11:41:21.426823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.796 [2024-11-15 11:41:21.426832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.796 [2024-11-15 11:41:21.426836] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426841] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc93550): datao=0, datal=3072, cccid=4 00:23:20.796 [2024-11-15 11:41:21.426847] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5700) on tqpair(0xc93550): expected_datao=0, payload_size=3072 00:23:20.796 [2024-11-15 11:41:21.426854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
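A side note on the GET LOG PAGE commands in this part of the trace (cdw10 values 00ff0070, 02ff0070 and, a little further below, 00010070): they are all reads of log page 0x70, the discovery log. CDW10 carries the log page identifier in its low byte and a 0-based dword count in its upper 16 bits, so the three reads work out to 1024 bytes (the discovery log header), 3072 bytes (header plus the two 1024-byte records the identify output later reports), and an 8-byte re-read of the generation counter, presumably to confirm it did not change while the records were being fetched. A small decoder, offered only as an illustration of that encoding (field layout per the NVMe specification; values taken from this trace):

def decode_get_log_page_cdw10(cdw10):
    """Decode the Get Log Page CDW10 fields seen in this trace."""
    lid = cdw10 & 0xFF               # log page identifier (0x70 = discovery log)
    numdl = (cdw10 >> 16) & 0xFFFF   # number of dwords to read, 0-based (lower half)
    return lid, (numdl + 1) * 4      # identifier, transfer size in bytes

for cdw10 in (0x00ff0070, 0x02ff0070, 0x00010070):
    lid, nbytes = decode_get_log_page_cdw10(cdw10)
    print(f"cdw10=0x{cdw10:08x}: log page 0x{lid:02x}, {nbytes} bytes")
    # -> 1024, 3072 and 8 bytes, matching the c2h_data datal values logged here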
00:23:20.796 [2024-11-15 11:41:21.426863] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426868] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.796 [2024-11-15 11:41:21.426935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.796 [2024-11-15 11:41:21.426940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5700) on tqpair=0xc93550 00:23:20.796 [2024-11-15 11:41:21.426956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.426961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc93550) 00:23:20.796 [2024-11-15 11:41:21.426970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.796 [2024-11-15 11:41:21.426990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5700, cid 4, qid 0 00:23:20.796 [2024-11-15 11:41:21.427097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.796 [2024-11-15 11:41:21.427106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.796 [2024-11-15 11:41:21.427111] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.427115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc93550): datao=0, datal=8, cccid=4 00:23:20.796 [2024-11-15 11:41:21.427121] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf5700) on tqpair(0xc93550): expected_datao=0, payload_size=8 00:23:20.796 [2024-11-15 11:41:21.427127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.427135] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.427140] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.470470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.796 [2024-11-15 11:41:21.470485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.796 [2024-11-15 11:41:21.470489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.796 [2024-11-15 11:41:21.470495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5700) on tqpair=0xc93550 00:23:20.796 ===================================================== 00:23:20.796 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:20.796 ===================================================== 00:23:20.796 Controller Capabilities/Features 00:23:20.796 ================================ 00:23:20.796 Vendor ID: 0000 00:23:20.796 Subsystem Vendor ID: 0000 00:23:20.796 Serial Number: .................... 00:23:20.796 Model Number: ........................................ 
00:23:20.796 Firmware Version: 25.01 00:23:20.796 Recommended Arb Burst: 0 00:23:20.796 IEEE OUI Identifier: 00 00 00 00:23:20.796 Multi-path I/O 00:23:20.796 May have multiple subsystem ports: No 00:23:20.796 May have multiple controllers: No 00:23:20.796 Associated with SR-IOV VF: No 00:23:20.796 Max Data Transfer Size: 131072 00:23:20.796 Max Number of Namespaces: 0 00:23:20.796 Max Number of I/O Queues: 1024 00:23:20.796 NVMe Specification Version (VS): 1.3 00:23:20.796 NVMe Specification Version (Identify): 1.3 00:23:20.796 Maximum Queue Entries: 128 00:23:20.796 Contiguous Queues Required: Yes 00:23:20.796 Arbitration Mechanisms Supported 00:23:20.796 Weighted Round Robin: Not Supported 00:23:20.796 Vendor Specific: Not Supported 00:23:20.796 Reset Timeout: 15000 ms 00:23:20.796 Doorbell Stride: 4 bytes 00:23:20.796 NVM Subsystem Reset: Not Supported 00:23:20.796 Command Sets Supported 00:23:20.796 NVM Command Set: Supported 00:23:20.796 Boot Partition: Not Supported 00:23:20.796 Memory Page Size Minimum: 4096 bytes 00:23:20.796 Memory Page Size Maximum: 4096 bytes 00:23:20.796 Persistent Memory Region: Not Supported 00:23:20.796 Optional Asynchronous Events Supported 00:23:20.796 Namespace Attribute Notices: Not Supported 00:23:20.796 Firmware Activation Notices: Not Supported 00:23:20.796 ANA Change Notices: Not Supported 00:23:20.796 PLE Aggregate Log Change Notices: Not Supported 00:23:20.796 LBA Status Info Alert Notices: Not Supported 00:23:20.796 EGE Aggregate Log Change Notices: Not Supported 00:23:20.796 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.796 Zone Descriptor Change Notices: Not Supported 00:23:20.796 Discovery Log Change Notices: Supported 00:23:20.796 Controller Attributes 00:23:20.796 128-bit Host Identifier: Not Supported 00:23:20.796 Non-Operational Permissive Mode: Not Supported 00:23:20.796 NVM Sets: Not Supported 00:23:20.796 Read Recovery Levels: Not Supported 00:23:20.796 Endurance Groups: Not Supported 00:23:20.796 Predictable Latency Mode: Not Supported 00:23:20.796 Traffic Based Keep ALive: Not Supported 00:23:20.796 Namespace Granularity: Not Supported 00:23:20.796 SQ Associations: Not Supported 00:23:20.796 UUID List: Not Supported 00:23:20.796 Multi-Domain Subsystem: Not Supported 00:23:20.796 Fixed Capacity Management: Not Supported 00:23:20.796 Variable Capacity Management: Not Supported 00:23:20.796 Delete Endurance Group: Not Supported 00:23:20.796 Delete NVM Set: Not Supported 00:23:20.796 Extended LBA Formats Supported: Not Supported 00:23:20.796 Flexible Data Placement Supported: Not Supported 00:23:20.796 00:23:20.796 Controller Memory Buffer Support 00:23:20.796 ================================ 00:23:20.796 Supported: No 00:23:20.796 00:23:20.796 Persistent Memory Region Support 00:23:20.796 ================================ 00:23:20.797 Supported: No 00:23:20.797 00:23:20.797 Admin Command Set Attributes 00:23:20.797 ============================ 00:23:20.797 Security Send/Receive: Not Supported 00:23:20.797 Format NVM: Not Supported 00:23:20.797 Firmware Activate/Download: Not Supported 00:23:20.797 Namespace Management: Not Supported 00:23:20.797 Device Self-Test: Not Supported 00:23:20.797 Directives: Not Supported 00:23:20.797 NVMe-MI: Not Supported 00:23:20.797 Virtualization Management: Not Supported 00:23:20.797 Doorbell Buffer Config: Not Supported 00:23:20.797 Get LBA Status Capability: Not Supported 00:23:20.797 Command & Feature Lockdown Capability: Not Supported 00:23:20.797 Abort Command Limit: 1 00:23:20.797 Async 
Event Request Limit: 4 00:23:20.797 Number of Firmware Slots: N/A 00:23:20.797 Firmware Slot 1 Read-Only: N/A 00:23:20.797 Firmware Activation Without Reset: N/A 00:23:20.797 Multiple Update Detection Support: N/A 00:23:20.797 Firmware Update Granularity: No Information Provided 00:23:20.797 Per-Namespace SMART Log: No 00:23:20.797 Asymmetric Namespace Access Log Page: Not Supported 00:23:20.797 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:20.797 Command Effects Log Page: Not Supported 00:23:20.797 Get Log Page Extended Data: Supported 00:23:20.797 Telemetry Log Pages: Not Supported 00:23:20.797 Persistent Event Log Pages: Not Supported 00:23:20.797 Supported Log Pages Log Page: May Support 00:23:20.797 Commands Supported & Effects Log Page: Not Supported 00:23:20.797 Feature Identifiers & Effects Log Page:May Support 00:23:20.797 NVMe-MI Commands & Effects Log Page: May Support 00:23:20.797 Data Area 4 for Telemetry Log: Not Supported 00:23:20.797 Error Log Page Entries Supported: 128 00:23:20.797 Keep Alive: Not Supported 00:23:20.797 00:23:20.797 NVM Command Set Attributes 00:23:20.797 ========================== 00:23:20.797 Submission Queue Entry Size 00:23:20.797 Max: 1 00:23:20.797 Min: 1 00:23:20.797 Completion Queue Entry Size 00:23:20.797 Max: 1 00:23:20.797 Min: 1 00:23:20.797 Number of Namespaces: 0 00:23:20.797 Compare Command: Not Supported 00:23:20.797 Write Uncorrectable Command: Not Supported 00:23:20.797 Dataset Management Command: Not Supported 00:23:20.797 Write Zeroes Command: Not Supported 00:23:20.797 Set Features Save Field: Not Supported 00:23:20.797 Reservations: Not Supported 00:23:20.797 Timestamp: Not Supported 00:23:20.797 Copy: Not Supported 00:23:20.797 Volatile Write Cache: Not Present 00:23:20.797 Atomic Write Unit (Normal): 1 00:23:20.797 Atomic Write Unit (PFail): 1 00:23:20.797 Atomic Compare & Write Unit: 1 00:23:20.797 Fused Compare & Write: Supported 00:23:20.797 Scatter-Gather List 00:23:20.797 SGL Command Set: Supported 00:23:20.797 SGL Keyed: Supported 00:23:20.797 SGL Bit Bucket Descriptor: Not Supported 00:23:20.797 SGL Metadata Pointer: Not Supported 00:23:20.797 Oversized SGL: Not Supported 00:23:20.797 SGL Metadata Address: Not Supported 00:23:20.797 SGL Offset: Supported 00:23:20.797 Transport SGL Data Block: Not Supported 00:23:20.797 Replay Protected Memory Block: Not Supported 00:23:20.797 00:23:20.797 Firmware Slot Information 00:23:20.797 ========================= 00:23:20.797 Active slot: 0 00:23:20.797 00:23:20.797 00:23:20.797 Error Log 00:23:20.797 ========= 00:23:20.797 00:23:20.797 Active Namespaces 00:23:20.797 ================= 00:23:20.797 Discovery Log Page 00:23:20.797 ================== 00:23:20.797 Generation Counter: 2 00:23:20.797 Number of Records: 2 00:23:20.797 Record Format: 0 00:23:20.797 00:23:20.797 Discovery Log Entry 0 00:23:20.797 ---------------------- 00:23:20.797 Transport Type: 3 (TCP) 00:23:20.797 Address Family: 1 (IPv4) 00:23:20.797 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:20.797 Entry Flags: 00:23:20.797 Duplicate Returned Information: 1 00:23:20.797 Explicit Persistent Connection Support for Discovery: 1 00:23:20.797 Transport Requirements: 00:23:20.797 Secure Channel: Not Required 00:23:20.797 Port ID: 0 (0x0000) 00:23:20.797 Controller ID: 65535 (0xffff) 00:23:20.797 Admin Max SQ Size: 128 00:23:20.797 Transport Service Identifier: 4420 00:23:20.797 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:20.797 Transport Address: 10.0.0.2 00:23:20.797 
Discovery Log Entry 1 00:23:20.797 ---------------------- 00:23:20.797 Transport Type: 3 (TCP) 00:23:20.797 Address Family: 1 (IPv4) 00:23:20.797 Subsystem Type: 2 (NVM Subsystem) 00:23:20.797 Entry Flags: 00:23:20.797 Duplicate Returned Information: 0 00:23:20.797 Explicit Persistent Connection Support for Discovery: 0 00:23:20.797 Transport Requirements: 00:23:20.797 Secure Channel: Not Required 00:23:20.797 Port ID: 0 (0x0000) 00:23:20.797 Controller ID: 65535 (0xffff) 00:23:20.797 Admin Max SQ Size: 128 00:23:20.797 Transport Service Identifier: 4420 00:23:20.797 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:20.797 Transport Address: 10.0.0.2 [2024-11-15 11:41:21.470604] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:20.797 [2024-11-15 11:41:21.470619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5100) on tqpair=0xc93550 00:23:20.797 [2024-11-15 11:41:21.470627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.797 [2024-11-15 11:41:21.470634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5280) on tqpair=0xc93550 00:23:20.797 [2024-11-15 11:41:21.470641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.797 [2024-11-15 11:41:21.470647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5400) on tqpair=0xc93550 00:23:20.797 [2024-11-15 11:41:21.470653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.797 [2024-11-15 11:41:21.470660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.797 [2024-11-15 11:41:21.470666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.797 [2024-11-15 11:41:21.470679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.470685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.470690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.797 [2024-11-15 11:41:21.470699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.797 [2024-11-15 11:41:21.470719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.797 [2024-11-15 11:41:21.470782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.797 [2024-11-15 11:41:21.470791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.797 [2024-11-15 11:41:21.470798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.470804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.797 [2024-11-15 11:41:21.470812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.470817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.470822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.797 [2024-11-15 11:41:21.470831] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.797 [2024-11-15 11:41:21.470850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.797 [2024-11-15 11:41:21.470955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.797 [2024-11-15 11:41:21.470964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.797 [2024-11-15 11:41:21.470968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.470974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.797 [2024-11-15 11:41:21.470980] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:20.797 [2024-11-15 11:41:21.470986] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:20.797 [2024-11-15 11:41:21.470998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.471003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.797 [2024-11-15 11:41:21.471008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.471126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.471131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.471149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.471271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.471275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.471292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471303] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.471401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.471405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.471423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.471572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.471577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.471594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.471723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.471727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.471744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.471872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.471877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.471893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.471904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.471912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.471926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.471994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.472005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.472010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.472028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.472046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.472061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.472166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.472175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.472180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.472197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.472216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.472229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.472329] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.472337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.472341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.472359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.472378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.472392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.472488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.798 [2024-11-15 11:41:21.472497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.798 [2024-11-15 11:41:21.472502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.798 [2024-11-15 11:41:21.472520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.798 [2024-11-15 11:41:21.472530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.798 [2024-11-15 11:41:21.472539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.798 [2024-11-15 11:41:21.472554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.798 [2024-11-15 11:41:21.472621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.472630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.472637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 [2024-11-15 11:41:21.472654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.472673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.472687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.472772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.472780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.472785] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 [2024-11-15 11:41:21.472802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.472821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.472835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.472922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.472930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.472935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 [2024-11-15 11:41:21.472952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.472962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.472971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.472984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.473075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.473083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.473088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 [2024-11-15 11:41:21.473105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.473124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.473138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.473205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.473214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.473218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 
[2024-11-15 11:41:21.473239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.473258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.473272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.473378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.473387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.473391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 [2024-11-15 11:41:21.473409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.473427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.473441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.473528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.799 [2024-11-15 11:41:21.473537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.799 [2024-11-15 11:41:21.473542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.799 [2024-11-15 11:41:21.473559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.799 [2024-11-15 11:41:21.473569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.799 [2024-11-15 11:41:21.473578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-15 11:41:21.473592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.799 [2024-11-15 11:41:21.473679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.473687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.473692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.473697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.473709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.473715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 
11:41:21.473719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.800 [2024-11-15 11:41:21.473728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.473742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.800 [2024-11-15 11:41:21.473808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.473817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.473821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.473826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.473839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.473847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.473852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.800 [2024-11-15 11:41:21.473860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.473875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.800 [2024-11-15 11:41:21.473981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.473990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.473995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.473999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.474012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.800 [2024-11-15 11:41:21.474031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.474045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.800 [2024-11-15 11:41:21.474133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.474142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.474146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.474164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.800 [2024-11-15 11:41:21.474182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.474196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.800 [2024-11-15 11:41:21.474284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.474293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.474297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.474314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.800 [2024-11-15 11:41:21.474333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.474348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.800 [2024-11-15 11:41:21.474408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.474417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.474421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.474439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.474453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc93550) 00:23:20.800 [2024-11-15 11:41:21.478469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.478489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf5580, cid 3, qid 0 00:23:20.800 [2024-11-15 11:41:21.478566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.478575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.478580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.478585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf5580) on tqpair=0xc93550 00:23:20.800 [2024-11-15 11:41:21.478595] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:23:20.800 00:23:20.800 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:20.800 [2024-11-15 11:41:21.521505] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:23:20.800 [2024-11-15 11:41:21.521542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322922 ] 00:23:20.800 [2024-11-15 11:41:21.579372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:20.800 [2024-11-15 11:41:21.579431] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:20.800 [2024-11-15 11:41:21.579438] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:20.800 [2024-11-15 11:41:21.579455] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:20.800 [2024-11-15 11:41:21.579475] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:20.800 [2024-11-15 11:41:21.579914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:20.800 [2024-11-15 11:41:21.579951] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fd9550 0 00:23:20.800 [2024-11-15 11:41:21.590477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:20.800 [2024-11-15 11:41:21.590499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:20.800 [2024-11-15 11:41:21.590506] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:20.800 [2024-11-15 11:41:21.590510] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:20.800 [2024-11-15 11:41:21.590546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.590554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.590559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.800 [2024-11-15 11:41:21.590573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:20.800 [2024-11-15 11:41:21.590596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.800 [2024-11-15 11:41:21.600470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.600484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.800 [2024-11-15 11:41:21.600489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.600495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.800 [2024-11-15 11:41:21.600514] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:20.800 [2024-11-15 11:41:21.600523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:20.800 [2024-11-15 11:41:21.600530] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:20.800 [2024-11-15 11:41:21.600545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.600551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.800 [2024-11-15 11:41:21.600555] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.800 [2024-11-15 11:41:21.600566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-15 11:41:21.600584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.800 [2024-11-15 11:41:21.600755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.800 [2024-11-15 11:41:21.600763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.801 [2024-11-15 11:41:21.600768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.600773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.801 [2024-11-15 11:41:21.600780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:20.801 [2024-11-15 11:41:21.600789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:20.801 [2024-11-15 11:41:21.600798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.600803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.600808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.801 [2024-11-15 11:41:21.600817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-15 11:41:21.600831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.801 [2024-11-15 11:41:21.600912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.801 [2024-11-15 11:41:21.600920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.801 [2024-11-15 11:41:21.600925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.600930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.801 [2024-11-15 11:41:21.600936] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:20.801 [2024-11-15 11:41:21.600947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:20.801 [2024-11-15 11:41:21.600956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.600961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.600966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.801 [2024-11-15 11:41:21.600974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-15 11:41:21.600988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.801 [2024-11-15 11:41:21.601058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.801 [2024-11-15 11:41:21.601066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.801 [2024-11-15 
11:41:21.601071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.801 [2024-11-15 11:41:21.601085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:20.801 [2024-11-15 11:41:21.601097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.801 [2024-11-15 11:41:21.601116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-15 11:41:21.601130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.801 [2024-11-15 11:41:21.601196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.801 [2024-11-15 11:41:21.601205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.801 [2024-11-15 11:41:21.601209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.801 [2024-11-15 11:41:21.601220] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:20.801 [2024-11-15 11:41:21.601226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:20.801 [2024-11-15 11:41:21.601236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:20.801 [2024-11-15 11:41:21.601347] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:20.801 [2024-11-15 11:41:21.601353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:20.801 [2024-11-15 11:41:21.601363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.801 [2024-11-15 11:41:21.601381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-15 11:41:21.601396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.801 [2024-11-15 11:41:21.601473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.801 [2024-11-15 11:41:21.601481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.801 [2024-11-15 11:41:21.601486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.801 
[2024-11-15 11:41:21.601497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:20.801 [2024-11-15 11:41:21.601510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.801 [2024-11-15 11:41:21.601529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-15 11:41:21.601544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.801 [2024-11-15 11:41:21.601609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.801 [2024-11-15 11:41:21.601617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.801 [2024-11-15 11:41:21.601622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:20.801 [2024-11-15 11:41:21.601635] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:20.801 [2024-11-15 11:41:21.601642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:20.801 [2024-11-15 11:41:21.601651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:20.801 [2024-11-15 11:41:21.601666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:20.801 [2024-11-15 11:41:21.601677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:20.801 [2024-11-15 11:41:21.601690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-15 11:41:21.601705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:20.801 [2024-11-15 11:41:21.601796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.801 [2024-11-15 11:41:21.601805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.801 [2024-11-15 11:41:21.601810] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601815] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=4096, cccid=0 00:23:20.801 [2024-11-15 11:41:21.601821] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203b100) on tqpair(0x1fd9550): expected_datao=0, payload_size=4096 00:23:20.801 [2024-11-15 11:41:21.601827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601857] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.801 [2024-11-15 11:41:21.601863] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.064 [2024-11-15 11:41:21.645484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.064 [2024-11-15 11:41:21.645489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:21.064 [2024-11-15 11:41:21.645504] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:21.064 [2024-11-15 11:41:21.645512] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:21.064 [2024-11-15 11:41:21.645518] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:21.064 [2024-11-15 11:41:21.645531] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:21.064 [2024-11-15 11:41:21.645538] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:21.064 [2024-11-15 11:41:21.645545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:21.064 [2024-11-15 11:41:21.645560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:21.064 [2024-11-15 11:41:21.645569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:21.064 [2024-11-15 11:41:21.645589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:21.064 [2024-11-15 11:41:21.645610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:21.064 [2024-11-15 11:41:21.645742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.064 [2024-11-15 11:41:21.645751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.064 [2024-11-15 11:41:21.645755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:21.064 [2024-11-15 11:41:21.645768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd9550) 00:23:21.064 [2024-11-15 11:41:21.645786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.064 [2024-11-15 11:41:21.645794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.064 [2024-11-15 11:41:21.645799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 
11:41:21.645804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.645811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.065 [2024-11-15 11:41:21.645819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.645824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.645829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.645836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.065 [2024-11-15 11:41:21.645844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.645849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.645853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.645861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.065 [2024-11-15 11:41:21.645867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.645878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.645887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.645892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.645900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.065 [2024-11-15 11:41:21.645917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b100, cid 0, qid 0 00:23:21.065 [2024-11-15 11:41:21.645924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b280, cid 1, qid 0 00:23:21.065 [2024-11-15 11:41:21.645930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b400, cid 2, qid 0 00:23:21.065 [2024-11-15 11:41:21.645936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.065 [2024-11-15 11:41:21.645943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.065 [2024-11-15 11:41:21.646055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.065 [2024-11-15 11:41:21.646063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.065 [2024-11-15 11:41:21.646068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.065 [2024-11-15 11:41:21.646083] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:21.065 [2024-11-15 11:41:21.646091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.646136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:21.065 [2024-11-15 11:41:21.646150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.065 [2024-11-15 11:41:21.646217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.065 [2024-11-15 11:41:21.646225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.065 [2024-11-15 11:41:21.646230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.065 [2024-11-15 11:41:21.646312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.646349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.065 [2024-11-15 11:41:21.646364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.065 [2024-11-15 11:41:21.646470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.065 [2024-11-15 11:41:21.646479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.065 [2024-11-15 11:41:21.646484] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646489] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=4096, cccid=4 00:23:21.065 [2024-11-15 11:41:21.646495] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203b700) on tqpair(0x1fd9550): expected_datao=0, payload_size=4096 00:23:21.065 [2024-11-15 11:41:21.646501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646510] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646514] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 
11:41:21.646525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.065 [2024-11-15 11:41:21.646533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.065 [2024-11-15 11:41:21.646537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.065 [2024-11-15 11:41:21.646553] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:21.065 [2024-11-15 11:41:21.646564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.646602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.065 [2024-11-15 11:41:21.646618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.065 [2024-11-15 11:41:21.646710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.065 [2024-11-15 11:41:21.646718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.065 [2024-11-15 11:41:21.646723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=4096, cccid=4 00:23:21.065 [2024-11-15 11:41:21.646733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203b700) on tqpair(0x1fd9550): expected_datao=0, payload_size=4096 00:23:21.065 [2024-11-15 11:41:21.646739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646748] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646752] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.065 [2024-11-15 11:41:21.646770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.065 [2024-11-15 11:41:21.646775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.065 [2024-11-15 11:41:21.646793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.646815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.065 [2024-11-15 11:41:21.646828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.065 [2024-11-15 11:41:21.646842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.065 [2024-11-15 11:41:21.646921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.065 [2024-11-15 11:41:21.646929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.065 [2024-11-15 11:41:21.646934] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=4096, cccid=4 00:23:21.065 [2024-11-15 11:41:21.646944] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203b700) on tqpair(0x1fd9550): expected_datao=0, payload_size=4096 00:23:21.065 [2024-11-15 11:41:21.646950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646958] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646963] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.065 [2024-11-15 11:41:21.646981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.065 [2024-11-15 11:41:21.646986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.065 [2024-11-15 11:41:21.646993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.065 [2024-11-15 11:41:21.647003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:21.065 [2024-11-15 11:41:21.647013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:21.066 [2024-11-15 11:41:21.647024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:21.066 [2024-11-15 11:41:21.647032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:21.066 [2024-11-15 11:41:21.647039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:21.066 [2024-11-15 11:41:21.647045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:21.066 [2024-11-15 11:41:21.647052] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:21.066 [2024-11-15 11:41:21.647058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:21.066 [2024-11-15 11:41:21.647065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:21.066 [2024-11-15 11:41:21.647082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 
[2024-11-15 11:41:21.647087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.066 [2024-11-15 11:41:21.647139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.066 [2024-11-15 11:41:21.647147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b880, cid 5, qid 0 00:23:21.066 [2024-11-15 11:41:21.647222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.647231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.066 [2024-11-15 11:41:21.647235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.066 [2024-11-15 11:41:21.647248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.647256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.066 [2024-11-15 11:41:21.647260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b880) on tqpair=0x1fd9550 00:23:21.066 [2024-11-15 11:41:21.647279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b880, cid 5, qid 0 00:23:21.066 [2024-11-15 11:41:21.647378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.647387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.066 [2024-11-15 11:41:21.647391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b880) on tqpair=0x1fd9550 00:23:21.066 [2024-11-15 11:41:21.647408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647435] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b880, cid 5, qid 0 00:23:21.066 [2024-11-15 11:41:21.647510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.647519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.066 [2024-11-15 11:41:21.647523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b880) on tqpair=0x1fd9550 00:23:21.066 [2024-11-15 11:41:21.647540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b880, cid 5, qid 0 00:23:21.066 [2024-11-15 11:41:21.647630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.647639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.066 [2024-11-15 11:41:21.647643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b880) on tqpair=0x1fd9550 00:23:21.066 [2024-11-15 11:41:21.647665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd9550) 00:23:21.066 [2024-11-15 11:41:21.647746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.066 [2024-11-15 11:41:21.647761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b880, cid 5, qid 0 00:23:21.066 
[2024-11-15 11:41:21.647771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b700, cid 4, qid 0 00:23:21.066 [2024-11-15 11:41:21.647778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203ba00, cid 6, qid 0 00:23:21.066 [2024-11-15 11:41:21.647784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203bb80, cid 7, qid 0 00:23:21.066 [2024-11-15 11:41:21.647917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.066 [2024-11-15 11:41:21.647925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.066 [2024-11-15 11:41:21.647929] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647934] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=8192, cccid=5 00:23:21.066 [2024-11-15 11:41:21.647940] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203b880) on tqpair(0x1fd9550): expected_datao=0, payload_size=8192 00:23:21.066 [2024-11-15 11:41:21.647946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647966] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647971] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.066 [2024-11-15 11:41:21.647986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.066 [2024-11-15 11:41:21.647990] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.647995] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=512, cccid=4 00:23:21.066 [2024-11-15 11:41:21.648000] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203b700) on tqpair(0x1fd9550): expected_datao=0, payload_size=512 00:23:21.066 [2024-11-15 11:41:21.648006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648014] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648019] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.066 [2024-11-15 11:41:21.648033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.066 [2024-11-15 11:41:21.648038] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648042] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=512, cccid=6 00:23:21.066 [2024-11-15 11:41:21.648048] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203ba00) on tqpair(0x1fd9550): expected_datao=0, payload_size=512 00:23:21.066 [2024-11-15 11:41:21.648054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648062] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648067] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:21.066 [2024-11-15 11:41:21.648082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:21.066 [2024-11-15 11:41:21.648086] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648091] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd9550): datao=0, datal=4096, cccid=7 00:23:21.066 [2024-11-15 11:41:21.648096] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203bb80) on tqpair(0x1fd9550): expected_datao=0, payload_size=4096 00:23:21.066 [2024-11-15 11:41:21.648102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648110] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648115] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.648133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.066 [2024-11-15 11:41:21.648137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.066 [2024-11-15 11:41:21.648147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b880) on tqpair=0x1fd9550 00:23:21.066 [2024-11-15 11:41:21.648162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.066 [2024-11-15 11:41:21.648169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.067 [2024-11-15 11:41:21.648174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.067 [2024-11-15 11:41:21.648179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b700) on tqpair=0x1fd9550 00:23:21.067 [2024-11-15 11:41:21.648192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.067 [2024-11-15 11:41:21.648199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.067 [2024-11-15 11:41:21.648204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.067 [2024-11-15 11:41:21.648209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203ba00) on tqpair=0x1fd9550 00:23:21.067 [2024-11-15 11:41:21.648218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.067 [2024-11-15 11:41:21.648225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.067 [2024-11-15 11:41:21.648230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.067 [2024-11-15 11:41:21.648235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203bb80) on tqpair=0x1fd9550 00:23:21.067 ===================================================== 00:23:21.067 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:21.067 ===================================================== 00:23:21.067 Controller Capabilities/Features 00:23:21.067 ================================ 00:23:21.067 Vendor ID: 8086 00:23:21.067 Subsystem Vendor ID: 8086 00:23:21.067 Serial Number: SPDK00000000000001 00:23:21.067 Model Number: SPDK bdev Controller 00:23:21.067 Firmware Version: 25.01 00:23:21.067 Recommended Arb Burst: 6 00:23:21.067 IEEE OUI Identifier: e4 d2 5c 00:23:21.067 Multi-path I/O 00:23:21.067 May have multiple subsystem ports: Yes 00:23:21.067 May have multiple controllers: Yes 00:23:21.067 Associated with SR-IOV VF: No 00:23:21.067 Max Data Transfer Size: 131072 00:23:21.067 Max Number of Namespaces: 32 00:23:21.067 Max Number of I/O Queues: 127 00:23:21.067 NVMe Specification Version (VS): 1.3 00:23:21.067 NVMe Specification Version (Identify): 1.3 
00:23:21.067 Maximum Queue Entries: 128
00:23:21.067 Contiguous Queues Required: Yes
00:23:21.067 Arbitration Mechanisms Supported
00:23:21.067 Weighted Round Robin: Not Supported
00:23:21.067 Vendor Specific: Not Supported
00:23:21.067 Reset Timeout: 15000 ms
00:23:21.067 Doorbell Stride: 4 bytes
00:23:21.067 NVM Subsystem Reset: Not Supported
00:23:21.067 Command Sets Supported
00:23:21.067 NVM Command Set: Supported
00:23:21.067 Boot Partition: Not Supported
00:23:21.067 Memory Page Size Minimum: 4096 bytes
00:23:21.067 Memory Page Size Maximum: 4096 bytes
00:23:21.067 Persistent Memory Region: Not Supported
00:23:21.067 Optional Asynchronous Events Supported
00:23:21.067 Namespace Attribute Notices: Supported
00:23:21.067 Firmware Activation Notices: Not Supported
00:23:21.067 ANA Change Notices: Not Supported
00:23:21.067 PLE Aggregate Log Change Notices: Not Supported
00:23:21.067 LBA Status Info Alert Notices: Not Supported
00:23:21.067 EGE Aggregate Log Change Notices: Not Supported
00:23:21.067 Normal NVM Subsystem Shutdown event: Not Supported
00:23:21.067 Zone Descriptor Change Notices: Not Supported
00:23:21.067 Discovery Log Change Notices: Not Supported
00:23:21.067 Controller Attributes
00:23:21.067 128-bit Host Identifier: Supported
00:23:21.067 Non-Operational Permissive Mode: Not Supported
00:23:21.067 NVM Sets: Not Supported
00:23:21.067 Read Recovery Levels: Not Supported
00:23:21.067 Endurance Groups: Not Supported
00:23:21.067 Predictable Latency Mode: Not Supported
00:23:21.067 Traffic Based Keep Alive: Not Supported
00:23:21.067 Namespace Granularity: Not Supported
00:23:21.067 SQ Associations: Not Supported
00:23:21.067 UUID List: Not Supported
00:23:21.067 Multi-Domain Subsystem: Not Supported
00:23:21.067 Fixed Capacity Management: Not Supported
00:23:21.067 Variable Capacity Management: Not Supported
00:23:21.067 Delete Endurance Group: Not Supported
00:23:21.067 Delete NVM Set: Not Supported
00:23:21.067 Extended LBA Formats Supported: Not Supported
00:23:21.067 Flexible Data Placement Supported: Not Supported
00:23:21.067
00:23:21.067 Controller Memory Buffer Support
00:23:21.067 ================================
00:23:21.067 Supported: No
00:23:21.067
00:23:21.067 Persistent Memory Region Support
00:23:21.067 ================================
00:23:21.067 Supported: No
00:23:21.067
00:23:21.067 Admin Command Set Attributes
00:23:21.067 ============================
00:23:21.067 Security Send/Receive: Not Supported
00:23:21.067 Format NVM: Not Supported
00:23:21.067 Firmware Activate/Download: Not Supported
00:23:21.067 Namespace Management: Not Supported
00:23:21.067 Device Self-Test: Not Supported
00:23:21.067 Directives: Not Supported
00:23:21.067 NVMe-MI: Not Supported
00:23:21.067 Virtualization Management: Not Supported
00:23:21.067 Doorbell Buffer Config: Not Supported
00:23:21.067 Get LBA Status Capability: Not Supported
00:23:21.067 Command & Feature Lockdown Capability: Not Supported
00:23:21.067 Abort Command Limit: 4
00:23:21.067 Async Event Request Limit: 4
00:23:21.067 Number of Firmware Slots: N/A
00:23:21.067 Firmware Slot 1 Read-Only: N/A
00:23:21.067 Firmware Activation Without Reset: N/A
00:23:21.067 Multiple Update Detection Support: N/A
00:23:21.067 Firmware Update Granularity: No Information Provided
00:23:21.067 Per-Namespace SMART Log: No
00:23:21.067 Asymmetric Namespace Access Log Page: Not Supported
00:23:21.067 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:21.067 Command Effects Log Page: Supported
00:23:21.067 Get Log Page Extended Data: Supported
00:23:21.067 Telemetry Log Pages: Not Supported
00:23:21.067 Persistent Event Log Pages: Not Supported
00:23:21.067 Supported Log Pages Log Page: May Support
00:23:21.067 Commands Supported & Effects Log Page: Not Supported
00:23:21.067 Feature Identifiers & Effects Log Page: May Support
00:23:21.067 NVMe-MI Commands & Effects Log Page: May Support
00:23:21.067 Data Area 4 for Telemetry Log: Not Supported
00:23:21.067 Error Log Page Entries Supported: 128
00:23:21.067 Keep Alive: Supported
00:23:21.067 Keep Alive Granularity: 10000 ms
00:23:21.067
00:23:21.067 NVM Command Set Attributes
00:23:21.067 ==========================
00:23:21.067 Submission Queue Entry Size
00:23:21.067 Max: 64
00:23:21.067 Min: 64
00:23:21.067 Completion Queue Entry Size
00:23:21.067 Max: 16
00:23:21.067 Min: 16
00:23:21.067 Number of Namespaces: 32
00:23:21.067 Compare Command: Supported
00:23:21.067 Write Uncorrectable Command: Not Supported
00:23:21.067 Dataset Management Command: Supported
00:23:21.067 Write Zeroes Command: Supported
00:23:21.067 Set Features Save Field: Not Supported
00:23:21.067 Reservations: Supported
00:23:21.067 Timestamp: Not Supported
00:23:21.067 Copy: Supported
00:23:21.067 Volatile Write Cache: Present
00:23:21.067 Atomic Write Unit (Normal): 1
00:23:21.067 Atomic Write Unit (PFail): 1
00:23:21.067 Atomic Compare & Write Unit: 1
00:23:21.067 Fused Compare & Write: Supported
00:23:21.067 Scatter-Gather List
00:23:21.067 SGL Command Set: Supported
00:23:21.067 SGL Keyed: Supported
00:23:21.067 SGL Bit Bucket Descriptor: Not Supported
00:23:21.067 SGL Metadata Pointer: Not Supported
00:23:21.067 Oversized SGL: Not Supported
00:23:21.067 SGL Metadata Address: Not Supported
00:23:21.067 SGL Offset: Supported
00:23:21.067 Transport SGL Data Block: Not Supported
00:23:21.067 Replay Protected Memory Block: Not Supported
00:23:21.067
00:23:21.067 Firmware Slot Information
00:23:21.067 =========================
00:23:21.067 Active slot: 1
00:23:21.067 Slot 1 Firmware Revision: 25.01
00:23:21.067
00:23:21.067
00:23:21.067 Commands Supported and Effects
00:23:21.067 ==============================
00:23:21.067 Admin Commands
00:23:21.067 --------------
00:23:21.067 Get Log Page (02h): Supported
00:23:21.067 Identify (06h): Supported
00:23:21.067 Abort (08h): Supported
00:23:21.067 Set Features (09h): Supported
00:23:21.067 Get Features (0Ah): Supported
00:23:21.067 Asynchronous Event Request (0Ch): Supported
00:23:21.067 Keep Alive (18h): Supported
00:23:21.067 I/O Commands
00:23:21.067 ------------
00:23:21.067 Flush (00h): Supported LBA-Change
00:23:21.067 Write (01h): Supported LBA-Change
00:23:21.067 Read (02h): Supported
00:23:21.067 Compare (05h): Supported
00:23:21.067 Write Zeroes (08h): Supported LBA-Change
00:23:21.067 Dataset Management (09h): Supported LBA-Change
00:23:21.067 Copy (19h): Supported LBA-Change
00:23:21.067
00:23:21.067 Error Log
00:23:21.067 =========
00:23:21.067
00:23:21.067 Arbitration
00:23:21.067 ===========
00:23:21.067 Arbitration Burst: 1
00:23:21.067
00:23:21.067 Power Management
00:23:21.067 ================
00:23:21.067 Number of Power States: 1
00:23:21.067 Current Power State: Power State #0
00:23:21.067 Power State #0:
00:23:21.067 Max Power: 0.00 W
00:23:21.067 Non-Operational State: Operational
00:23:21.067 Entry Latency: Not Reported
00:23:21.067 Exit Latency: Not Reported
00:23:21.067 Relative Read Throughput: 0
00:23:21.067 Relative Read Latency: 0
00:23:21.067 Relative Write Throughput: 0
00:23:21.068 Relative Write Latency: 0
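[editor's note] The controller data above is the harness's identify pass against nqn.2016-06.io.spdk:cnode1 over the TCP listener set up earlier in this run. A minimal sketch of how a similar Identify Controller dump could be reproduced by hand against this kind of target follows; the 10.0.0.2:4420 listener address, the /dev/nvme0 device node, and the availability of nvme-cli plus an SPDK build tree are assumptions layered on top of what the log shows, not something it confirms.

#!/usr/bin/env bash
# Sketch: reproduce an Identify Controller report against the test subsystem.
# Assumptions (not confirmed by this log): target still listening on
# 10.0.0.2:4420, nvme-cli installed, SPDK_DIR points at a built SPDK tree.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRADDR=10.0.0.2
TRSVCID=4420
SUBNQN=nqn.2016-06.io.spdk:cnode1

# Option 1: SPDK's userspace identify example, no kernel connect required.
"${SPDK_DIR}/build/examples/identify" \
    -r "trtype:tcp adrfam:IPv4 traddr:${TRADDR} trsvcid:${TRSVCID} subnqn:${SUBNQN}"

# Option 2: kernel initiator via nvme-cli, then decode the controller data.
sudo modprobe nvme-tcp
sudo nvme connect -t tcp -a "${TRADDR}" -s "${TRSVCID}" -n "${SUBNQN}"
sudo nvme list                      # confirm which /dev/nvmeX the subsystem got
sudo nvme id-ctrl /dev/nvme0 -H     # -H prints the human-readable field names
sudo nvme disconnect -n "${SUBNQN}"

Either path exercises the same Identify (06h) admin command that the table above reports as Supported. [end editor's note]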
00:23:21.068 Idle Power: Not Reported 00:23:21.068 Active Power: Not Reported 00:23:21.068 Non-Operational Permissive Mode: Not Supported 00:23:21.068 00:23:21.068 Health Information 00:23:21.068 ================== 00:23:21.068 Critical Warnings: 00:23:21.068 Available Spare Space: OK 00:23:21.068 Temperature: OK 00:23:21.068 Device Reliability: OK 00:23:21.068 Read Only: No 00:23:21.068 Volatile Memory Backup: OK 00:23:21.068 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:21.068 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:21.068 Available Spare: 0% 00:23:21.068 Available Spare Threshold: 0% 00:23:21.068 Life Percentage Used:[2024-11-15 11:41:21.648352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.648368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.648384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203bb80, cid 7, qid 0 00:23:21.068 [2024-11-15 11:41:21.648469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.648478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.648482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203bb80) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648524] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:21.068 [2024-11-15 11:41:21.648538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b100) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.068 [2024-11-15 11:41:21.648552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b280) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.068 [2024-11-15 11:41:21.648565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b400) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.068 [2024-11-15 11:41:21.648578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.068 [2024-11-15 11:41:21.648593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.648612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.648628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.648699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.648708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.648712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.648744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.648762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.648837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.648845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.648850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.648861] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:21.068 [2024-11-15 11:41:21.648867] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:21.068 [2024-11-15 11:41:21.648878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.648897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.648911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.648980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.648988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.648992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.648997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.649010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.649028] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.649042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.649105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.649113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.649118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.649135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.649154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.649173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.649237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.649245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.649250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.649267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.649285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.649299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.649363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.649371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.649376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.649393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.649412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.649426] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.649496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.649505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.649510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.649527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.068 [2024-11-15 11:41:21.649545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.068 [2024-11-15 11:41:21.649559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.068 [2024-11-15 11:41:21.649624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.068 [2024-11-15 11:41:21.649632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.068 [2024-11-15 11:41:21.649637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.068 [2024-11-15 11:41:21.649654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.068 [2024-11-15 11:41:21.649664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.649672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.649689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.649763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.649773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.649778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.649785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.649800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.649807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.649813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.649823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.649837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.649902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 
11:41:21.649911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.649915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.649920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.649932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.649937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.649942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.649950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.649964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 
[2024-11-15 11:41:21.650312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650718] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.069 [2024-11-15 11:41:21.650850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.069 [2024-11-15 11:41:21.650868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.069 [2024-11-15 11:41:21.650882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.069 [2024-11-15 11:41:21.650943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.069 [2024-11-15 11:41:21.650951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.069 [2024-11-15 11:41:21.650956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.069 [2024-11-15 11:41:21.650961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.070 [2024-11-15 11:41:21.650973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.650978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.650983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.070 [2024-11-15 11:41:21.650991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.070 [2024-11-15 11:41:21.651005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.070 [2024-11-15 11:41:21.651078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.070 [2024-11-15 11:41:21.651086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.070 [2024-11-15 11:41:21.651090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.070 [2024-11-15 11:41:21.651108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651117] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.070 [2024-11-15 11:41:21.651126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.070 [2024-11-15 11:41:21.651140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.070 [2024-11-15 11:41:21.651204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.070 [2024-11-15 11:41:21.651212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.070 [2024-11-15 11:41:21.651217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.070 [2024-11-15 11:41:21.651234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.070 [2024-11-15 11:41:21.651252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.070 [2024-11-15 11:41:21.651266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.070 [2024-11-15 11:41:21.651339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.070 [2024-11-15 11:41:21.651347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.070 [2024-11-15 11:41:21.651353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.070 [2024-11-15 11:41:21.651372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.651382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.070 [2024-11-15 11:41:21.651390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.070 [2024-11-15 11:41:21.651405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.070 [2024-11-15 11:41:21.655467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.070 [2024-11-15 11:41:21.655480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.070 [2024-11-15 11:41:21.655484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.655490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.070 [2024-11-15 11:41:21.655503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.655509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.655513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd9550) 00:23:21.070 [2024-11-15 11:41:21.655523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.070 [2024-11-15 11:41:21.655539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203b580, cid 3, qid 0 00:23:21.070 [2024-11-15 11:41:21.655727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:21.070 [2024-11-15 11:41:21.655736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:21.070 [2024-11-15 11:41:21.655741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:21.070 [2024-11-15 11:41:21.655745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x203b580) on tqpair=0x1fd9550 00:23:21.070 [2024-11-15 11:41:21.655755] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:23:21.070 0% 00:23:21.070 Data Units Read: 0 00:23:21.070 Data Units Written: 0 00:23:21.070 Host Read Commands: 0 00:23:21.070 Host Write Commands: 0 00:23:21.070 Controller Busy Time: 0 minutes 00:23:21.070 Power Cycles: 0 00:23:21.070 Power On Hours: 0 hours 00:23:21.070 Unsafe Shutdowns: 0 00:23:21.070 Unrecoverable Media Errors: 0 00:23:21.070 Lifetime Error Log Entries: 0 00:23:21.070 Warning Temperature Time: 0 minutes 00:23:21.070 Critical Temperature Time: 0 minutes 00:23:21.070 00:23:21.070 Number of Queues 00:23:21.070 ================ 00:23:21.070 Number of I/O Submission Queues: 127 00:23:21.070 Number of I/O Completion Queues: 127 00:23:21.070 00:23:21.070 Active Namespaces 00:23:21.070 ================= 00:23:21.070 Namespace ID:1 00:23:21.070 Error Recovery Timeout: Unlimited 00:23:21.070 Command Set Identifier: NVM (00h) 00:23:21.070 Deallocate: Supported 00:23:21.070 Deallocated/Unwritten Error: Not Supported 00:23:21.070 Deallocated Read Value: Unknown 00:23:21.070 Deallocate in Write Zeroes: Not Supported 00:23:21.070 Deallocated Guard Field: 0xFFFF 00:23:21.070 Flush: Supported 00:23:21.070 Reservation: Supported 00:23:21.070 Namespace Sharing Capabilities: Multiple Controllers 00:23:21.070 Size (in LBAs): 131072 (0GiB) 00:23:21.070 Capacity (in LBAs): 131072 (0GiB) 00:23:21.070 Utilization (in LBAs): 131072 (0GiB) 00:23:21.070 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:21.070 EUI64: ABCDEF0123456789 00:23:21.070 UUID: b61f0533-f048-40d6-a24b-97b4df710482 00:23:21.070 Thin Provisioning: Not Supported 00:23:21.070 Per-NS Atomic Units: Yes 00:23:21.070 Atomic Boundary Size (Normal): 0 00:23:21.070 Atomic Boundary Size (PFail): 0 00:23:21.070 Atomic Boundary Offset: 0 00:23:21.070 Maximum Single Source Range Length: 65535 00:23:21.070 Maximum Copy Length: 65535 00:23:21.070 Maximum Source Range Count: 1 00:23:21.070 NGUID/EUI64 Never Reused: No 00:23:21.070 Namespace Write Protected: No 00:23:21.070 Number of LBA Formats: 1 00:23:21.070 Current LBA Format: LBA Format #00 00:23:21.070 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:21.070 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT 
SIGTERM EXIT 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.070 rmmod nvme_tcp 00:23:21.070 rmmod nvme_fabrics 00:23:21.070 rmmod nvme_keyring 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1322854 ']' 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1322854 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1322854 ']' 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1322854 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1322854 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1322854' 00:23:21.070 killing process with pid 1322854 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1322854 00:23:21.070 11:41:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1322854 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.330 11:41:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.868 00:23:23.868 real 0m9.095s 00:23:23.868 user 0m5.613s 00:23:23.868 sys 0m4.602s 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.868 ************************************ 00:23:23.868 END TEST nvmf_identify 00:23:23.868 ************************************ 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.868 ************************************ 00:23:23.868 START TEST nvmf_perf 00:23:23.868 ************************************ 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:23.868 * Looking for test storage... 00:23:23.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:23.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.868 --rc genhtml_branch_coverage=1 00:23:23.868 --rc genhtml_function_coverage=1 00:23:23.868 --rc genhtml_legend=1 00:23:23.868 --rc geninfo_all_blocks=1 00:23:23.868 --rc geninfo_unexecuted_blocks=1 00:23:23.868 00:23:23.868 ' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:23.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.868 --rc genhtml_branch_coverage=1 00:23:23.868 --rc genhtml_function_coverage=1 00:23:23.868 --rc genhtml_legend=1 00:23:23.868 --rc geninfo_all_blocks=1 00:23:23.868 --rc geninfo_unexecuted_blocks=1 00:23:23.868 00:23:23.868 ' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:23.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.868 --rc genhtml_branch_coverage=1 00:23:23.868 --rc genhtml_function_coverage=1 00:23:23.868 --rc genhtml_legend=1 00:23:23.868 --rc geninfo_all_blocks=1 00:23:23.868 --rc geninfo_unexecuted_blocks=1 00:23:23.868 00:23:23.868 ' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:23.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.868 --rc genhtml_branch_coverage=1 00:23:23.868 --rc genhtml_function_coverage=1 00:23:23.868 --rc genhtml_legend=1 00:23:23.868 --rc geninfo_all_blocks=1 00:23:23.868 --rc geninfo_unexecuted_blocks=1 00:23:23.868 00:23:23.868 ' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:23.868 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.869 11:41:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.869 11:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:29.140 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:29.140 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:29.140 Found net devices under 0000:af:00.0: cvl_0_0 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.140 11:41:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.140 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:29.141 Found net devices under 0000:af:00.1: cvl_0_1 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.141 11:41:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:29.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:23:29.141 00:23:29.141 --- 10.0.0.2 ping statistics --- 00:23:29.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.141 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:29.141 00:23:29.141 --- 10.0.0.1 ping statistics --- 00:23:29.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.141 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1326594 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1326594 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1326594 ']' 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
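Note: the trace above is nvmf_tcp_init building the point-to-point TCP test topology: the target-side port (cvl_0_0) is moved into a private network namespace, both ends are addressed out of 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms connectivity before the target is started inside the namespace. A condensed shell sketch of those steps follows (not part of the captured output; interface, namespace, and address names are the ones shown in the trace, and the iptables comment tag the harness adds is omitted):

    # move the target port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched under 'ip netns exec cvl_0_0_ns_spdk', so it listens on 10.0.0.2 inside the namespace while the initiator-side tools connect from 10.0.0.1 on the host side.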
00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.141 [2024-11-15 11:41:29.568786] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:23:29.141 [2024-11-15 11:41:29.568843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.141 [2024-11-15 11:41:29.668224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.141 [2024-11-15 11:41:29.718030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.141 [2024-11-15 11:41:29.718074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.141 [2024-11-15 11:41:29.718084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.141 [2024-11-15 11:41:29.718093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.141 [2024-11-15 11:41:29.718100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.141 [2024-11-15 11:41:29.720102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.141 [2024-11-15 11:41:29.720209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.141 [2024-11-15 11:41:29.720294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.141 [2024-11-15 11:41:29.720295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:29.141 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:32.431 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:32.431 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:32.432 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:23:32.432 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:33.000 11:41:33 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:33.000 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:23:33.000 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:33.000 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:33.000 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.000 [2024-11-15 11:41:33.816291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.000 11:41:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.568 11:41:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:33.568 11:41:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.568 11:41:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:33.568 11:41:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:34.136 11:41:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.136 [2024-11-15 11:41:34.951632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.136 11:41:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:34.704 11:41:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:23:34.704 11:41:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:23:34.704 11:41:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:34.704 11:41:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:23:36.081 Initializing NVMe Controllers 00:23:36.082 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:23:36.082 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:23:36.082 Initialization complete. Launching workers. 
00:23:36.082 ======================================================== 00:23:36.082 Latency(us) 00:23:36.082 Device Information : IOPS MiB/s Average min max 00:23:36.082 PCIE (0000:86:00.0) NSID 1 from core 0: 69172.32 270.20 461.71 53.89 4643.55 00:23:36.082 ======================================================== 00:23:36.082 Total : 69172.32 270.20 461.71 53.89 4643.55 00:23:36.082 00:23:36.082 11:41:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:37.018 Initializing NVMe Controllers 00:23:37.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:37.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:37.018 Initialization complete. Launching workers. 00:23:37.018 ======================================================== 00:23:37.018 Latency(us) 00:23:37.018 Device Information : IOPS MiB/s Average min max 00:23:37.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 85.00 0.33 12102.00 121.19 44987.46 00:23:37.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14712.89 6981.76 48878.52 00:23:37.018 ======================================================== 00:23:37.018 Total : 156.00 0.61 13290.29 121.19 48878.52 00:23:37.018 00:23:37.277 11:41:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:38.795 Initializing NVMe Controllers 00:23:38.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:38.795 Initialization complete. Launching workers. 00:23:38.795 ======================================================== 00:23:38.795 Latency(us) 00:23:38.795 Device Information : IOPS MiB/s Average min max 00:23:38.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10435.98 40.77 3075.47 567.77 6746.84 00:23:38.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3837.99 14.99 8372.50 5119.34 15980.03 00:23:38.795 ======================================================== 00:23:38.795 Total : 14273.97 55.76 4499.74 567.77 15980.03 00:23:38.795 00:23:38.795 11:41:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:38.795 11:41:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:38.795 11:41:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.326 Initializing NVMe Controllers 00:23:41.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.327 Controller IO queue size 128, less than required. 00:23:41.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:41.327 Controller IO queue size 128, less than required. 00:23:41.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:41.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:41.327 Initialization complete. Launching workers. 00:23:41.327 ======================================================== 00:23:41.327 Latency(us) 00:23:41.327 Device Information : IOPS MiB/s Average min max 00:23:41.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1505.50 376.37 86670.32 50028.83 138016.46 00:23:41.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.00 139.25 236718.03 71022.30 362873.58 00:23:41.327 ======================================================== 00:23:41.327 Total : 2062.50 515.62 127192.30 50028.83 362873.58 00:23:41.327 00:23:41.327 11:41:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:41.327 No valid NVMe controllers or AIO or URING devices found 00:23:41.327 Initializing NVMe Controllers 00:23:41.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.327 Controller IO queue size 128, less than required. 00:23:41.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.327 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:41.327 Controller IO queue size 128, less than required. 00:23:41.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.327 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:41.327 WARNING: Some requested NVMe devices were skipped 00:23:41.327 11:41:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:43.877 Initializing NVMe Controllers 00:23:43.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.877 Controller IO queue size 128, less than required. 00:23:43.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.877 Controller IO queue size 128, less than required. 00:23:43.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:43.877 Initialization complete. Launching workers. 
00:23:43.877 00:23:43.877 ==================== 00:23:43.877 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:43.877 TCP transport: 00:23:43.877 polls: 7968 00:23:43.877 idle_polls: 4887 00:23:43.877 sock_completions: 3081 00:23:43.877 nvme_completions: 5067 00:23:43.877 submitted_requests: 7660 00:23:43.877 queued_requests: 1 00:23:43.877 00:23:43.877 ==================== 00:23:43.877 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:43.877 TCP transport: 00:23:43.877 polls: 8073 00:23:43.877 idle_polls: 4818 00:23:43.877 sock_completions: 3255 00:23:43.877 nvme_completions: 5823 00:23:43.877 submitted_requests: 8734 00:23:43.877 queued_requests: 1 00:23:43.877 ======================================================== 00:23:43.877 Latency(us) 00:23:43.877 Device Information : IOPS MiB/s Average min max 00:23:43.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1263.90 315.98 104288.43 57450.87 165486.75 00:23:43.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1452.51 363.13 88662.05 48471.45 125222.68 00:23:43.877 ======================================================== 00:23:43.877 Total : 2716.42 679.10 95932.74 48471.45 165486.75 00:23:43.877 00:23:43.877 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:43.877 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.136 rmmod nvme_tcp 00:23:44.136 rmmod nvme_fabrics 00:23:44.136 rmmod nvme_keyring 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1326594 ']' 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1326594 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1326594 ']' 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 1326594 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1326594 00:23:44.136 11:41:44 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1326594' 00:23:44.136 killing process with pid 1326594 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 1326594 00:23:44.136 11:41:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1326594 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.041 11:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.947 00:23:47.947 real 0m24.434s 00:23:47.947 user 1m7.307s 00:23:47.947 sys 0m7.569s 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:47.947 ************************************ 00:23:47.947 END TEST nvmf_perf 00:23:47.947 ************************************ 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.947 ************************************ 00:23:47.947 START TEST nvmf_fio_host 00:23:47.947 ************************************ 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:47.947 * Looking for test storage... 
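Note: the nvmf_perf test that finishes above configured the target over JSON-RPC and then measured it from the initiator side with spdk_nvme_perf at several queue depths and I/O sizes. A condensed sketch of that sequence follows (not part of the captured output; rpc.py and spdk_nvme_perf stand for the full build paths shown in the trace, and the subsystem, bdev, and address names are the ones used above):

    # target-side configuration, issued against the running nvmf_tgt
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 64 512                 # 64 MB malloc bdev, 512-byte blocks; trace shows it as Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe attached via gen_nvme.sh / load_subsystem_config
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator-side measurements over the fabric (two of the runs shown above)
    spdk_nvme_perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat

After the runs, the subsystem is deleted with nvmf_delete_subsystem, the nvme-tcp/nvme-fabrics modules are unloaded, the iptables rule and the cvl_0_0_ns_spdk namespace are torn down, and the suite moves on to nvmf_fio_host below.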
00:23:47.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.947 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:48.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.207 --rc genhtml_branch_coverage=1 00:23:48.207 --rc genhtml_function_coverage=1 00:23:48.207 --rc genhtml_legend=1 00:23:48.207 --rc geninfo_all_blocks=1 00:23:48.207 --rc geninfo_unexecuted_blocks=1 00:23:48.207 00:23:48.207 ' 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:48.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.207 --rc genhtml_branch_coverage=1 00:23:48.207 --rc genhtml_function_coverage=1 00:23:48.207 --rc genhtml_legend=1 00:23:48.207 --rc geninfo_all_blocks=1 00:23:48.207 --rc geninfo_unexecuted_blocks=1 00:23:48.207 00:23:48.207 ' 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:48.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.207 --rc genhtml_branch_coverage=1 00:23:48.207 --rc genhtml_function_coverage=1 00:23:48.207 --rc genhtml_legend=1 00:23:48.207 --rc geninfo_all_blocks=1 00:23:48.207 --rc geninfo_unexecuted_blocks=1 00:23:48.207 00:23:48.207 ' 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:48.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.207 --rc genhtml_branch_coverage=1 00:23:48.207 --rc genhtml_function_coverage=1 00:23:48.207 --rc genhtml_legend=1 00:23:48.207 --rc geninfo_all_blocks=1 00:23:48.207 --rc geninfo_unexecuted_blocks=1 00:23:48.207 00:23:48.207 ' 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.207 11:41:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.207 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.208 
11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.208 11:41:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:53.484 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:53.484 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.484 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:53.485 Found net devices under 0000:af:00.0: cvl_0_0 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:53.485 Found net devices under 0000:af:00.1: cvl_0_1 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:23:53.485 00:23:53.485 --- 10.0.0.2 ping statistics --- 00:23:53.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.485 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:53.485 00:23:53.485 --- 10.0.0.1 ping statistics --- 00:23:53.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.485 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.485 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1333147 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1333147 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1333147 ']' 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.744 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.744 [2024-11-15 11:41:54.421753] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:23:53.744 [2024-11-15 11:41:54.421809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.744 [2024-11-15 11:41:54.523726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.744 [2024-11-15 11:41:54.573641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.744 [2024-11-15 11:41:54.573682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.744 [2024-11-15 11:41:54.573693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.744 [2024-11-15 11:41:54.573703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.744 [2024-11-15 11:41:54.573710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.744 [2024-11-15 11:41:54.575774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.744 [2024-11-15 11:41:54.575876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.744 [2024-11-15 11:41:54.575978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.744 [2024-11-15 11:41:54.575982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.004 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:54.004 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:23:54.004 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:54.263 [2024-11-15 11:41:54.932858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.263 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:54.263 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.263 11:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.263 11:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:54.522 Malloc1 00:23:54.522 11:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.781 11:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:55.039 11:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.299 [2024-11-15 11:41:56.091356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.299 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:55.558 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:55.834 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:55.834 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:55.834 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:55.834 11:41:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:56.094 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:56.094 fio-3.35 00:23:56.094 Starting 1 thread 00:23:58.612 00:23:58.612 test: (groupid=0, jobs=1): 
err= 0: pid=1333698: Fri Nov 15 11:41:59 2024 00:23:58.612 read: IOPS=12.9k, BW=50.2MiB/s (52.6MB/s)(101MiB/2005msec) 00:23:58.612 slat (usec): min=2, max=182, avg= 2.63, stdev= 1.61 00:23:58.612 clat (usec): min=2327, max=9412, avg=5437.65, stdev=381.89 00:23:58.612 lat (usec): min=2359, max=9414, avg=5440.28, stdev=381.81 00:23:58.612 clat percentiles (usec): 00:23:58.612 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:23:58.612 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:23:58.612 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 5997], 00:23:58.612 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 6849], 99.95th=[ 8094], 00:23:58.612 | 99.99th=[ 9372] 00:23:58.612 bw ( KiB/s): min=50115, max=51968, per=99.97%, avg=51402.75, stdev=877.40, samples=4 00:23:58.612 iops : min=12528, max=12992, avg=12850.50, stdev=219.72, samples=4 00:23:58.612 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(100MiB/2005msec); 0 zone resets 00:23:58.612 slat (usec): min=2, max=167, avg= 2.71, stdev= 1.20 00:23:58.612 clat (usec): min=1811, max=8749, avg=4466.87, stdev=325.03 00:23:58.612 lat (usec): min=1827, max=8751, avg=4469.58, stdev=325.00 00:23:58.612 clat percentiles (usec): 00:23:58.612 | 1.00th=[ 3752], 5.00th=[ 3982], 10.00th=[ 4080], 20.00th=[ 4228], 00:23:58.612 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:23:58.612 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:23:58.612 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 6980], 99.95th=[ 8029], 00:23:58.612 | 99.99th=[ 8586] 00:23:58.612 bw ( KiB/s): min=50594, max=51776, per=99.93%, avg=51274.50, stdev=508.07, samples=4 00:23:58.612 iops : min=12648, max=12944, avg=12818.50, stdev=127.24, samples=4 00:23:58.612 lat (msec) : 2=0.01%, 4=2.94%, 10=97.04% 00:23:58.612 cpu : usr=77.50%, sys=20.06%, ctx=58, majf=0, minf=3 00:23:58.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:58.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:58.612 issued rwts: total=25772,25719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:58.612 00:23:58.612 Run status group 0 (all jobs): 00:23:58.612 READ: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=101MiB (106MB), run=2005-2005msec 00:23:58.612 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=100MiB (105MB), run=2005-2005msec 00:23:58.612 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:58.612 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:58.612 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:58.612 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.612 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 
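The plugin resolution now repeating for mock_sgl_config.fio is the same one traced above for example_config.fio. Stripped of the sanitizer-library probing (which finds nothing on this host), it reduces to the condensed sketch below; the paths, filename string and block size are taken from this trace, not from a canonical recipe:

    # Condensed sketch of the fio_plugin invocation traced above: fio loads SPDK's
    # NVMe ioengine via LD_PRELOAD, and the --filename string encodes the NVMe-oF
    # TCP target (transport, address family, address, service id, namespace).
    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    FIO_BIN=/usr/src/fio/fio
    CONFIG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio

    LD_PRELOAD="$PLUGIN" "$FIO_BIN" "$CONFIG" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
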
00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:58.613 11:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:58.869 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:58.869 fio-3.35 00:23:58.869 Starting 1 thread 00:24:01.388 00:24:01.388 test: (groupid=0, jobs=1): err= 0: pid=1334355: Fri Nov 15 11:42:02 2024 00:24:01.388 read: IOPS=8268, BW=129MiB/s (135MB/s)(259MiB/2006msec) 00:24:01.388 slat (usec): min=3, max=125, avg= 4.16, stdev= 1.51 00:24:01.388 clat (usec): min=2604, max=16133, avg=8992.49, stdev=2087.13 00:24:01.388 lat (usec): min=2608, max=16137, avg=8996.65, stdev=2087.13 00:24:01.388 clat percentiles (usec): 00:24:01.388 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7177], 00:24:01.388 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:24:01.388 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11469], 95.00th=[12387], 00:24:01.388 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15008], 99.95th=[15139], 00:24:01.388 | 99.99th=[15401] 00:24:01.388 bw ( KiB/s): min=53440, max=79776, per=51.26%, avg=67816.00, stdev=12849.99, samples=4 00:24:01.388 iops : min= 3340, max= 4986, avg=4238.50, stdev=803.12, samples=4 00:24:01.388 write: IOPS=4894, BW=76.5MiB/s (80.2MB/s)(138MiB/1808msec); 0 zone resets 00:24:01.388 slat (usec): min=45, 
max=258, avg=46.58, stdev= 4.13 00:24:01.388 clat (usec): min=2817, max=19197, avg=11053.96, stdev=2083.80 00:24:01.388 lat (usec): min=2863, max=19242, avg=11100.55, stdev=2083.55 00:24:01.388 clat percentiles (usec): 00:24:01.388 | 1.00th=[ 7308], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9372], 00:24:01.388 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:24:01.388 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13829], 95.00th=[15139], 00:24:01.388 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18744], 99.95th=[19006], 00:24:01.388 | 99.99th=[19268] 00:24:01.388 bw ( KiB/s): min=56096, max=81824, per=89.86%, avg=70376.00, stdev=12765.03, samples=4 00:24:01.388 iops : min= 3506, max= 5114, avg=4398.50, stdev=797.81, samples=4 00:24:01.388 lat (msec) : 4=0.40%, 10=55.45%, 20=44.15% 00:24:01.388 cpu : usr=87.13%, sys=11.97%, ctx=57, majf=0, minf=3 00:24:01.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:01.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.388 issued rwts: total=16587,8850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.388 00:24:01.388 Run status group 0 (all jobs): 00:24:01.388 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2006-2006msec 00:24:01.388 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=138MiB (145MB), run=1808-1808msec 00:24:01.388 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.645 rmmod nvme_tcp 00:24:01.645 rmmod nvme_fabrics 00:24:01.645 rmmod nvme_keyring 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1333147 ']' 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1333147 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 1333147 ']' 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 
1333147 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1333147 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1333147' 00:24:01.645 killing process with pid 1333147 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1333147 00:24:01.645 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1333147 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.903 11:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.431 11:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.431 00:24:04.431 real 0m16.101s 00:24:04.431 user 1m1.191s 00:24:04.431 sys 0m6.148s 00:24:04.431 11:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.432 ************************************ 00:24:04.432 END TEST nvmf_fio_host 00:24:04.432 ************************************ 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.432 ************************************ 00:24:04.432 START TEST nvmf_failover 00:24:04.432 ************************************ 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:04.432 * Looking for test storage... 00:24:04.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.432 --rc genhtml_branch_coverage=1 00:24:04.432 --rc genhtml_function_coverage=1 00:24:04.432 --rc genhtml_legend=1 00:24:04.432 --rc geninfo_all_blocks=1 00:24:04.432 --rc geninfo_unexecuted_blocks=1 00:24:04.432 00:24:04.432 ' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.432 --rc genhtml_branch_coverage=1 00:24:04.432 --rc genhtml_function_coverage=1 00:24:04.432 --rc genhtml_legend=1 00:24:04.432 --rc geninfo_all_blocks=1 00:24:04.432 --rc geninfo_unexecuted_blocks=1 00:24:04.432 00:24:04.432 ' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.432 --rc genhtml_branch_coverage=1 00:24:04.432 --rc genhtml_function_coverage=1 00:24:04.432 --rc genhtml_legend=1 00:24:04.432 --rc geninfo_all_blocks=1 00:24:04.432 --rc geninfo_unexecuted_blocks=1 00:24:04.432 00:24:04.432 ' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:04.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.432 --rc genhtml_branch_coverage=1 00:24:04.432 --rc genhtml_function_coverage=1 00:24:04.432 --rc genhtml_legend=1 00:24:04.432 --rc geninfo_all_blocks=1 00:24:04.432 --rc geninfo_unexecuted_blocks=1 00:24:04.432 00:24:04.432 ' 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.432 11:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.432 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
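Once the target is running, failover.sh drives everything through this rpc_py: it creates the TCP transport, a malloc bdev sized by MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE, one subsystem, and listeners on all three ports (4420/4421/4422) so bdevperf can fail over between them. A consolidated sketch of that RPC sequence, assembled from the commands traced further down in this log (the per-port loop is a condensation; the script issues the calls one by one):

    # RPC sequence used by failover.sh later in this trace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf then attaches to the subsystem with failover enabled
    # (see the bdev_nvme_attach_controller ... -x failover call below).
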
00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.433 11:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:09.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:09.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:09.690 Found net devices under 0000:af:00.0: cvl_0_0 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:09.690 Found net devices under 0000:af:00.1: cvl_0_1 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:09.690 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.691 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.691 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.691 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.691 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.691 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:24:09.948 00:24:09.948 --- 10.0.0.2 ping statistics --- 00:24:09.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.948 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:09.948 00:24:09.948 --- 10.0.0.1 ping statistics --- 00:24:09.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.948 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1338994 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1338994 00:24:09.948 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:09.949 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1338994 ']' 00:24:09.949 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.949 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:09.949 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.949 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:09.949 11:42:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:09.949 [2024-11-15 11:42:10.789637] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:24:09.949 [2024-11-15 11:42:10.789692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.206 [2024-11-15 11:42:10.862549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:10.206 [2024-11-15 11:42:10.902869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:10.206 [2024-11-15 11:42:10.902903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.206 [2024-11-15 11:42:10.902909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.206 [2024-11-15 11:42:10.902915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.206 [2024-11-15 11:42:10.902919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.206 [2024-11-15 11:42:10.904352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.206 [2024-11-15 11:42:10.904457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.206 [2024-11-15 11:42:10.904457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.207 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.464 [2024-11-15 11:42:11.309119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.721 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:10.979 Malloc0 00:24:10.979 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.236 11:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.493 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.751 [2024-11-15 11:42:12.412261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.751 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:12.009 [2024-11-15 11:42:12.681059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:12.009 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:12.267 [2024-11-15 11:42:12.953948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1339399 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1339399 /var/tmp/bdevperf.sock 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1339399 ']' 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:12.267 11:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 11:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:12.524 11:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:12.524 11:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:13.089 NVMe0n1 00:24:13.089 11:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:13.347 00:24:13.347 11:42:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1339663 00:24:13.347 11:42:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.347 11:42:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:14.718 11:42:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.718 [2024-11-15 11:42:15.436718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.718 [2024-11-15 11:42:15.436763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.718 [2024-11-15 11:42:15.436771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.718 
[2024-11-15 11:42:15.436782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.718 [2024-11-15 11:42:15.436788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.718 [2024-11-15 11:42:15.436793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.718 [2024-11-15 11:42:15.436799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.436997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437013] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the 
state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 [2024-11-15 11:42:15.437154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15750f0 is same with the state(6) to be set 00:24:14.719 11:42:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:17.995 11:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:18.253 00:24:18.253 11:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:18.511 11:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:21.785 11:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.785 [2024-11-15 11:42:22.511240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.785 11:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:22.716 11:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.974 [2024-11-15 11:42:23.784974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be 
set 00:24:22.974 [2024-11-15 11:42:23.785078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785316] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the 
state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.974 [2024-11-15 11:42:23.785446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.975 [2024-11-15 11:42:23.785451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.975 [2024-11-15 11:42:23.785462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.975 [2024-11-15 11:42:23.785468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.975 [2024-11-15 11:42:23.785474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.975 [2024-11-15 11:42:23.785479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1576ca0 is same with the state(6) to be set 00:24:22.975 11:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1339663 00:24:29.530 { 00:24:29.530 "results": [ 00:24:29.530 { 00:24:29.530 "job": "NVMe0n1", 00:24:29.530 "core_mask": "0x1", 00:24:29.530 "workload": "verify", 00:24:29.530 "status": "finished", 00:24:29.530 "verify_range": { 00:24:29.530 "start": 0, 00:24:29.530 "length": 16384 00:24:29.530 }, 00:24:29.530 "queue_depth": 128, 00:24:29.531 "io_size": 4096, 00:24:29.531 "runtime": 15.010511, 00:24:29.531 "iops": 10287.724381934766, 00:24:29.531 "mibps": 40.18642336693268, 00:24:29.531 "io_failed": 4501, 00:24:29.531 "io_timeout": 0, 00:24:29.531 "avg_latency_us": 12055.424824082256, 00:24:29.531 "min_latency_us": 558.5454545454545, 00:24:29.531 "max_latency_us": 35746.90909090909 00:24:29.531 } 00:24:29.531 ], 00:24:29.531 "core_count": 1 00:24:29.531 } 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1339399 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1339399 ']' 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1339399 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1339399 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1339399' 00:24:29.531 killing process with pid 1339399 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1339399 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1339399 00:24:29.531 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- 
# cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.531 [2024-11-15 11:42:13.021053] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:24:29.531 [2024-11-15 11:42:13.021120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339399 ] 00:24:29.531 [2024-11-15 11:42:13.117148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.531 [2024-11-15 11:42:13.166369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.531 Running I/O for 15 seconds... 00:24:29.531 10445.00 IOPS, 40.80 MiB/s [2024-11-15T10:42:30.384Z] [2024-11-15 11:42:15.438256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.531 [2024-11-15 11:42:15.438597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.531 [2024-11-15 11:42:15.438608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 
[2024-11-15 11:42:15.438934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.438986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.438998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.532 [2024-11-15 11:42:15.439196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.532 [2024-11-15 11:42:15.439207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96264 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:29.533 [2024-11-15 11:42:15.439814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.533 [2024-11-15 11:42:15.439835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.533 [2024-11-15 11:42:15.439847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.439986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.439997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.534 [2024-11-15 11:42:15.440007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440028] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440243] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.534 [2024-11-15 11:42:15.440426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.534 [2024-11-15 11:42:15.440435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 
[2024-11-15 11:42:15.440690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.535 [2024-11-15 11:42:15.440981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.440992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.535 [2024-11-15 11:42:15.441002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.441014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.535 [2024-11-15 11:42:15.441025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.441037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.535 [2024-11-15 11:42:15.441047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.535 [2024-11-15 11:42:15.441058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:15.441068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:15.441105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.536 [2024-11-15 11:42:15.441115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.536 [2024-11-15 11:42:15.441124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96456 len:8 PRP1 0x0 PRP2 0x0 00:24:29.536 [2024-11-15 11:42:15.441136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:15.441195] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:29.536 [2024-11-15 11:42:15.441222] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.536 [2024-11-15 11:42:15.441233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.536 [2024-11-15 11:42:15.441244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.536 [2024-11-15 11:42:15.441253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.536 [2024-11-15 11:42:15.441263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.536 [2024-11-15 11:42:15.441273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.536 [2024-11-15 11:42:15.441283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:29.536 [2024-11-15 11:42:15.441293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.536 [2024-11-15 11:42:15.441302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:29.536 [2024-11-15 11:42:15.441335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23de830 (9): Bad file descriptor
00:24:29.536 [2024-11-15 11:42:15.445646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:29.536 [2024-11-15 11:42:15.513503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
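The run above shows bdev_nvme failing over nqn.2016-06.io.spdk:cnode1 from listener 10.0.0.2:4420 to 10.0.0.2:4421: I/O queued on the deleted submission queue is aborted (ABORTED - SQ DELETION), the TCP qpair is torn down, and the controller reset completes. A minimal sketch of how such a two-listener failover can be driven with SPDK's rpc.py follows; the bdev/controller names, the Malloc namespace, the host socket path and the -x failover multipath option are illustrative assumptions, not the commands this job actually ran.
# Illustrative sketch only -- names, addresses and flags below are assumptions, not taken from this log.
# Target side: one TCP subsystem with a namespace and two listeners.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Host side: attach both paths under the same controller name so bdev_nvme can fail over
# (the -x/--multipath option and its values may differ between SPDK releases).
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Removing the active listener forces the failover and produces SQ-deletion aborts like those above.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420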
00:24:29.536 10040.50 IOPS, 39.22 MiB/s [2024-11-15T10:42:30.389Z] 10201.33 IOPS, 39.85 MiB/s [2024-11-15T10:42:30.389Z] 10207.25 IOPS, 39.87 MiB/s [2024-11-15T10:42:30.389Z] [2024-11-15 11:42:19.212322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212606] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.536 [2024-11-15 11:42:19.212853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.536 [2024-11-15 11:42:19.212865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.212874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.212887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.212896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.212909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.212919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.212930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.212940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.212952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.212962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.212974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.212983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.212998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.537 [2024-11-15 11:42:19.213139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.537 [2024-11-15 11:42:19.213161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.537 [2024-11-15 11:42:19.213183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.537 [2024-11-15 11:42:19.213205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.537 [2024-11-15 11:42:19.213227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.537 [2024-11-15 11:42:19.213239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.537 [2024-11-15 11:42:19.213248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 
11:42:19.213507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.538 [2024-11-15 11:42:19.213854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.538 [2024-11-15 11:42:19.213864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.213876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.213885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.213897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.213906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.213918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.213928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.213940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:46 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.213949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.213960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.213970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.213982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.213991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.539 [2024-11-15 11:42:19.214186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26672 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26680 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26688 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26696 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26704 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26712 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.539 [2024-11-15 11:42:19.214448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26720 len:8 PRP1 0x0 PRP2 0x0 00:24:29.539 [2024-11-15 11:42:19.214457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.539 [2024-11-15 11:42:19.214472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.539 [2024-11-15 11:42:19.214480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26728 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26736 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26744 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26752 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 
[2024-11-15 11:42:19.214629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26760 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26768 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26776 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26784 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26792 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26800 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26808 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26816 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26824 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26832 len:8 PRP1 0x0 PRP2 0x0 00:24:29.540 [2024-11-15 11:42:19.214955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.540 [2024-11-15 11:42:19.214965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.540 [2024-11-15 11:42:19.214972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.540 [2024-11-15 11:42:19.214980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26840 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.214989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.214999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26848 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:26856 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26864 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26872 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26880 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26888 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26896 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26904 len:8 PRP1 0x0 PRP2 0x0 
00:24:29.541 [2024-11-15 11:42:19.215275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26912 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26920 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26928 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26936 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26944 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26952 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26960 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26968 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26976 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.215608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.541 [2024-11-15 11:42:19.215616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26984 len:8 PRP1 0x0 PRP2 0x0 00:24:29.541 [2024-11-15 11:42:19.215625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.541 [2024-11-15 11:42:19.215635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.541 [2024-11-15 11:42:19.225886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.542 [2024-11-15 11:42:19.225903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26992 len:8 PRP1 0x0 PRP2 0x0 00:24:29.542 [2024-11-15 11:42:19.225918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.225931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.542 [2024-11-15 11:42:19.225941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.542 [2024-11-15 11:42:19.225952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27000 len:8 PRP1 0x0 PRP2 0x0 00:24:29.542 [2024-11-15 11:42:19.225965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.225978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.542 [2024-11-15 11:42:19.225988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.542 [2024-11-15 11:42:19.225999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27008 len:8 PRP1 0x0 PRP2 0x0 00:24:29.542 [2024-11-15 11:42:19.226011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.542 [2024-11-15 11:42:19.226035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.542 [2024-11-15 11:42:19.226046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27016 len:8 PRP1 0x0 PRP2 0x0 00:24:29.542 [2024-11-15 11:42:19.226062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.542 [2024-11-15 11:42:19.226085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.542 [2024-11-15 11:42:19.226096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27024 len:8 PRP1 0x0 PRP2 0x0 00:24:29.542 [2024-11-15 11:42:19.226109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.542 [2024-11-15 11:42:19.226132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.542 [2024-11-15 11:42:19.226143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27032 len:8 PRP1 0x0 PRP2 0x0 00:24:29.542 [2024-11-15 11:42:19.226156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226214] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:29.542 [2024-11-15 11:42:19.226253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.542 [2024-11-15 11:42:19.226268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.542 [2024-11-15 11:42:19.226297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.542 [2024-11-15 11:42:19.226327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.542 [2024-11-15 11:42:19.226355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:19.226368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:29.542 [2024-11-15 11:42:19.226421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23de830 (9): Bad file descriptor 00:24:29.542 [2024-11-15 11:42:19.232307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:29.542 [2024-11-15 11:42:19.263523] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:29.542 10113.40 IOPS, 39.51 MiB/s [2024-11-15T10:42:30.395Z] 10217.00 IOPS, 39.91 MiB/s [2024-11-15T10:42:30.395Z] 10203.71 IOPS, 39.86 MiB/s [2024-11-15T10:42:30.395Z] 10288.50 IOPS, 40.19 MiB/s [2024-11-15T10:42:30.395Z] 10300.22 IOPS, 40.24 MiB/s [2024-11-15T10:42:30.395Z] [2024-11-15 11:42:23.789203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.542 [2024-11-15 11:42:23.789245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 
11:42:23.789623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.542 [2024-11-15 11:42:23.789741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.542 [2024-11-15 11:42:23.789751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.789980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.789990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.543 [2024-11-15 11:42:23.790255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.543 [2024-11-15 11:42:23.790277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.543 [2024-11-15 11:42:23.790433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.543 [2024-11-15 11:42:23.790444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 
11:42:23.790514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.544 [2024-11-15 11:42:23.790704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.790740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21400 len:8 PRP1 0x0 PRP2 0x0 00:24:29.544 [2024-11-15 11:42:23.790749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.544 [2024-11-15 11:42:23.790809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.544 [2024-11-15 11:42:23.790830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.544 [2024-11-15 11:42:23.790849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.544 [2024-11-15 11:42:23.790869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.790879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de830 is same with the state(6) to be set 00:24:29.544 [2024-11-15 11:42:23.791053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.544 [2024-11-15 11:42:23.791063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.791072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:8 PRP1 0x0 PRP2 0x0 00:24:29.544 [2024-11-15 11:42:23.791081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.791093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.544 [2024-11-15 11:42:23.791100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.791108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21416 len:8 PRP1 0x0 PRP2 0x0 00:24:29.544 [2024-11-15 11:42:23.791118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.791128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.544 [2024-11-15 11:42:23.791135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.791145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21424 len:8 PRP1 0x0 PRP2 0x0 00:24:29.544 [2024-11-15 11:42:23.791155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.791165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.544 [2024-11-15 11:42:23.791172] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.791181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21432 len:8 PRP1 0x0 PRP2 0x0 00:24:29.544 [2024-11-15 11:42:23.791190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.791203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.544 [2024-11-15 11:42:23.791211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.791220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:8 PRP1 0x0 PRP2 0x0 00:24:29.544 [2024-11-15 11:42:23.791229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.544 [2024-11-15 11:42:23.791239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.544 [2024-11-15 11:42:23.791247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.544 [2024-11-15 11:42:23.791255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21448 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21456 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21464 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21480 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21488 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21496 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21512 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21520 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 
11:42:23.791622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21528 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21544 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21552 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21560 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:8 PRP1 0x0 PRP2 0x0 00:24:29.545 [2024-11-15 11:42:23.791814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.545 [2024-11-15 11:42:23.791824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.545 [2024-11-15 11:42:23.791832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.545 [2024-11-15 11:42:23.791840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21576 len:8 PRP1 0x0 PRP2 0x0
00:24:29.545-00:24:29.552 [2024-11-15 11:42:23.791849 - 11:42:23.813204] nvme_qpair.c: the same three-entry sequence repeats for every remaining queued request on sqid:1 (cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0): 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually; 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. The aborted commands cover WRITE lba 21584-21760, READ lba 20744-20888, and WRITE lba 20896-21400.
00:24:29.552 [2024-11-15 11:42:23.813285] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:29.552 [2024-11-15 11:42:23.813306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:29.552 [2024-11-15 11:42:23.813373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23de830 (9): Bad file descriptor
00:24:29.552 [2024-11-15 11:42:23.821199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:29.552 [2024-11-15 11:42:23.855432] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
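Two notes on reading the trace above, since the same pattern recurs throughout this run: the "(00/08)" pair that spdk_nvme_print_completion appends is the completion's status code type / status code (generic status, command aborted because its submission queue was deleted during failover), and the "Resetting controller successful" line is exactly what host/failover.sh greps for a few entries below (failover.sh@65-67, expecting a count of 3). The bash sketch below is illustrative only; the helper names are not part of the SPDK tree or of failover.sh.

# decode_nvme_status SCT SC -- hypothetical helper; maps the "(sct/sc)" pair printed by
# spdk_nvme_print_completion to the name shown in the log.
decode_nvme_status() {
    case "$1/$2" in
        00/00) echo "SUCCESS" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;   # status reported for every queued I/O above
        *)     echo "unrecognized status ($1/$2)" ;;
    esac
}

# count_resets LOGFILE -- the same check failover.sh performs below; this test expects
# exactly 3 "Resetting controller successful" lines in try.txt on a passing run.
count_resets() {
    grep -c 'Resetting controller successful' "$1"
}

decode_nvme_status 00 08    # prints: ABORTED - SQ DELETION
# count_resets try.txt      # would print 3 on a passing run of this test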
00:24:29.552 10249.30 IOPS, 40.04 MiB/s [2024-11-15T10:42:30.405Z] 10256.36 IOPS, 40.06 MiB/s [2024-11-15T10:42:30.405Z] 10265.33 IOPS, 40.10 MiB/s [2024-11-15T10:42:30.405Z] 10333.23 IOPS, 40.36 MiB/s [2024-11-15T10:42:30.405Z] 10300.57 IOPS, 40.24 MiB/s
00:24:29.552 Latency(us)
00:24:29.552 [2024-11-15T10:42:30.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:29.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:29.552 Verification LBA range: start 0x0 length 0x4000
00:24:29.552 NVMe0n1 : 15.01 10287.72 40.19 299.86 0.00 12055.42 558.55 35746.91
00:24:29.552 [2024-11-15T10:42:30.405Z] ===================================================================================================================
00:24:29.552 [2024-11-15T10:42:30.405Z] Total : 10287.72 40.19 299.86 0.00 12055.42 558.55 35746.91
00:24:29.552 Received shutdown signal, test time was about 15.000000 seconds
00:24:29.552
00:24:29.552 Latency(us)
00:24:29.552 [2024-11-15T10:42:30.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:29.552 [2024-11-15T10:42:30.405Z] ===================================================================================================================
00:24:29.552 [2024-11-15T10:42:30.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1342289
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1342289 /var/tmp/bdevperf.sock
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1342289 ']'
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:29.552 11:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:29.552 [2024-11-15 11:42:30.149128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.552 11:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:29.809 [2024-11-15 11:42:30.417960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:29.809 11:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:30.067 NVMe0n1 00:24:30.324 11:42:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:30.581 00:24:30.581 11:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.147 00:24:31.147 11:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:31.147 11:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:31.404 11:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.660 11:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:34.932 11:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.932 11:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:34.932 11:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1343353 00:24:34.932 11:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.932 11:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1343353 00:24:36.097 { 00:24:36.097 "results": [ 00:24:36.097 { 00:24:36.097 "job": "NVMe0n1", 00:24:36.097 "core_mask": "0x1", 
00:24:36.097 "workload": "verify", 00:24:36.097 "status": "finished", 00:24:36.097 "verify_range": { 00:24:36.097 "start": 0, 00:24:36.097 "length": 16384 00:24:36.097 }, 00:24:36.097 "queue_depth": 128, 00:24:36.097 "io_size": 4096, 00:24:36.097 "runtime": 1.013267, 00:24:36.097 "iops": 10241.13091613563, 00:24:36.097 "mibps": 40.0044176411548, 00:24:36.097 "io_failed": 0, 00:24:36.097 "io_timeout": 0, 00:24:36.097 "avg_latency_us": 12429.800321340026, 00:24:36.097 "min_latency_us": 2874.6472727272726, 00:24:36.097 "max_latency_us": 15609.483636363637 00:24:36.097 } 00:24:36.097 ], 00:24:36.097 "core_count": 1 00:24:36.097 } 00:24:36.356 11:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:36.356 [2024-11-15 11:42:29.660536] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:24:36.356 [2024-11-15 11:42:29.660601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342289 ] 00:24:36.356 [2024-11-15 11:42:29.755770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.356 [2024-11-15 11:42:29.800606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.356 [2024-11-15 11:42:32.461325] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:36.356 [2024-11-15 11:42:32.461378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.356 [2024-11-15 11:42:32.461393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.356 [2024-11-15 11:42:32.461406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.356 [2024-11-15 11:42:32.461415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.356 [2024-11-15 11:42:32.461426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.356 [2024-11-15 11:42:32.461436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.356 [2024-11-15 11:42:32.461447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.356 [2024-11-15 11:42:32.461457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.356 [2024-11-15 11:42:32.461474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:24:36.356 [2024-11-15 11:42:32.461508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:36.356 [2024-11-15 11:42:32.461526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86f830 (9): Bad file descriptor 00:24:36.356 [2024-11-15 11:42:32.554736] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:36.356 Running I/O for 1 seconds... 00:24:36.356 10249.00 IOPS, 40.04 MiB/s 00:24:36.356 Latency(us) 00:24:36.356 [2024-11-15T10:42:37.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.356 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:36.356 Verification LBA range: start 0x0 length 0x4000 00:24:36.356 NVMe0n1 : 1.01 10241.13 40.00 0.00 0.00 12429.80 2874.65 15609.48 00:24:36.356 [2024-11-15T10:42:37.209Z] =================================================================================================================== 00:24:36.356 [2024-11-15T10:42:37.209Z] Total : 10241.13 40.00 0.00 0.00 12429.80 2874.65 15609.48 00:24:36.356 11:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.356 11:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:36.356 11:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.613 11:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.613 11:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:36.871 11:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.435 11:42:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:40.706 11:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.706 11:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1342289 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1342289 ']' 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1342289 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1342289 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1342289' 00:24:40.706 killing process with pid 1342289 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1342289 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1342289 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:40.706 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.963 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.963 rmmod nvme_tcp 00:24:40.963 rmmod nvme_fabrics 00:24:40.963 rmmod nvme_keyring 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1338994 ']' 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1338994 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1338994 ']' 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1338994 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1338994 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1338994' 00:24:41.221 killing process with pid 1338994 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1338994 00:24:41.221 11:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1338994 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.478 11:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.376 11:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.376 00:24:43.376 real 0m39.350s 00:24:43.376 user 2m8.585s 00:24:43.376 sys 0m7.848s 00:24:43.376 11:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:43.376 11:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.376 ************************************ 00:24:43.376 END TEST nvmf_failover 00:24:43.376 ************************************ 00:24:43.377 11:42:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:43.377 11:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:43.377 11:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:43.377 11:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.377 ************************************ 00:24:43.377 START TEST nvmf_host_discovery 00:24:43.377 ************************************ 00:24:43.377 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:43.634 * Looking for test storage... 
00:24:43.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.634 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:43.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.635 --rc genhtml_branch_coverage=1 00:24:43.635 --rc genhtml_function_coverage=1 00:24:43.635 --rc genhtml_legend=1 00:24:43.635 --rc geninfo_all_blocks=1 00:24:43.635 --rc geninfo_unexecuted_blocks=1 00:24:43.635 00:24:43.635 ' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:43.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.635 --rc genhtml_branch_coverage=1 00:24:43.635 --rc genhtml_function_coverage=1 00:24:43.635 --rc genhtml_legend=1 00:24:43.635 --rc geninfo_all_blocks=1 00:24:43.635 --rc geninfo_unexecuted_blocks=1 00:24:43.635 00:24:43.635 ' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:43.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.635 --rc genhtml_branch_coverage=1 00:24:43.635 --rc genhtml_function_coverage=1 00:24:43.635 --rc genhtml_legend=1 00:24:43.635 --rc geninfo_all_blocks=1 00:24:43.635 --rc geninfo_unexecuted_blocks=1 00:24:43.635 00:24:43.635 ' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:43.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.635 --rc genhtml_branch_coverage=1 00:24:43.635 --rc genhtml_function_coverage=1 00:24:43.635 --rc genhtml_legend=1 00:24:43.635 --rc geninfo_all_blocks=1 00:24:43.635 --rc geninfo_unexecuted_blocks=1 00:24:43.635 00:24:43.635 ' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:43.635 11:42:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.635 11:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:50.183 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:50.183 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.183 11:42:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:50.183 Found net devices under 0000:af:00.0: cvl_0_0 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:50.183 Found net devices under 0000:af:00.1: cvl_0_1 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.183 
11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.183 11:42:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:24:50.183 00:24:50.183 --- 10.0.0.2 ping statistics --- 00:24:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.183 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
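The interface and namespace commands interleaved above boil down to this bring-up sequence (sketch only; the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses are specific to this machine and run). The target-side NIC is isolated in a network namespace so host and target can share one box over a real link:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                 # target NIC moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1          # target -> initiator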
00:24:50.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:24:50.183 00:24:50.183 --- 10.0.0.1 ping statistics --- 00:24:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.183 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.183 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1348136 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1348136 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1348136 ']' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 [2024-11-15 11:42:50.182585] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
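Once the target app being started above is listening on its default RPC socket (/var/tmp/spdk.sock), the test configures it roughly as follows (a sketch assembled from the RPCs visible in this log; sizes and names as logged):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # Listener for the well-known discovery subsystem on port 8009.
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    # Two null bdevs to back namespaces later (size/block-size args as logged).
    $RPC bdev_null_create null0 1000 512
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine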
00:24:50.184 [2024-11-15 11:42:50.182641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.184 [2024-11-15 11:42:50.255386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.184 [2024-11-15 11:42:50.294591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.184 [2024-11-15 11:42:50.294625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.184 [2024-11-15 11:42:50.294631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.184 [2024-11-15 11:42:50.294637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.184 [2024-11-15 11:42:50.294641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.184 [2024-11-15 11:42:50.295164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 [2024-11-15 11:42:50.457383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 [2024-11-15 11:42:50.469572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 null0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 null1 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1348158 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1348158 /tmp/host.sock 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1348158 ']' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:50.184 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 [2024-11-15 11:42:50.553078] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
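The process being launched above is a second SPDK app acting as the NVMe-oF host, controlled over its own RPC socket; a condensed sketch of the discovery flow it runs (all values as logged in this run):

    HOST_SOCK=/tmp/host.sock
    ./build/bin/nvmf_tgt -m 0x1 -r $HOST_SOCK &
    ./scripts/rpc.py -s $HOST_SOCK log_set_flag bdev_nvme

    # Point the host at the discovery service on 10.0.0.2:8009; discovered
    # subsystems get attached as controllers with the "nvme" base name.
    ./scripts/rpc.py -s $HOST_SOCK bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target side: create a data subsystem so discovery has something to report.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # The checks that follow poll these two host-side views until they match.
    ./scripts/rpc.py -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'
    ./scripts/rpc.py -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name'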
00:24:50.184 [2024-11-15 11:42:50.553134] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348158 ] 00:24:50.184 [2024-11-15 11:42:50.648490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.184 [2024-11-15 11:42:50.698476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.184 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.185 11:42:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 [2024-11-15 11:42:51.151292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:50.441 11:42:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.441 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.698 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.699 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.699 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.699 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.699 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.699 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:24:50.699 11:42:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:24:51.261 [2024-11-15 11:42:51.876027] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.261 [2024-11-15 11:42:51.876051] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.261 [2024-11-15 11:42:51.876068] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.261 
[2024-11-15 11:42:51.962370] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:51.517 [2024-11-15 11:42:52.178662] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:51.517 [2024-11-15 11:42:52.179693] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c762a0:1 started. 00:24:51.518 [2024-11-15 11:42:52.181616] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:51.518 [2024-11-15 11:42:52.181644] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:51.518 [2024-11-15 11:42:52.184729] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c762a0 was disconnected and freed. delete nvme_qpair. 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.774 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.774 11:42:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.775 11:42:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.775 [2024-11-15 11:42:52.581305] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c44ba0:1 started. 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.775 [2024-11-15 11:42:52.585448] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c44ba0 was disconnected and freed. delete nvme_qpair. 
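The target-side and host-side provisioning exercised up to this point can be read back out of the xtrace above; as a minimal sketch, assuming SPDK's scripts/rpc.py client, the target app on its default RPC socket, the host app on /tmp/host.sock (as in the trace), and null bdevs null0/null1 created earlier in the run, the same sequence would look roughly like:

  # Host side: enable bdev_nvme log flags and start discovery against the
  # target's discovery service on 10.0.0.2:8009 (host/discovery.sh@50-51 above).
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Target side: build up the subsystem that the discovery service reports.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Allowing the host NQN is what lets the discovery ctrlr attach nvme0 above.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # The second namespace is what produces the nvme0n2 bdev checked just below.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1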
00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.775 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.031 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.032 [2024-11-15 11:42:52.691403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:52.032 [2024-11-15 11:42:52.692121] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:52.032 [2024-11-15 11:42:52.692148] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.032 11:42:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.032 [2024-11-15 11:42:52.778424] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:52.032 11:42:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:24:52.032 [2024-11-15 11:42:52.881214] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:52.032 [2024-11-15 11:42:52.881262] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:52.032 [2024-11-15 11:42:52.881273] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:52.032 [2024-11-15 11:42:52.881280] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.400 11:42:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.400 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.401 [2024-11-15 11:42:53.963287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.401 [2024-11-15 11:42:53.963318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.401 [2024-11-15 11:42:53.963332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.401 [2024-11-15 11:42:53.963342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.401 [2024-11-15 11:42:53.963353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.401 [2024-11-15 11:42:53.963364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.401 [2024-11-15 11:42:53.963376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.401 [2024-11-15 11:42:53.963387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.401 [2024-11-15 11:42:53.963401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.401 [2024-11-15 11:42:53.963696] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:53.401 [2024-11-15 11:42:53.963714] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.401 [2024-11-15 11:42:53.973294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.401 [2024-11-15 11:42:53.983337] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.401 [2024-11-15 11:42:53.983357] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.401 [2024-11-15 11:42:53.983364] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:53.401 [2024-11-15 11:42:53.983370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.401 [2024-11-15 11:42:53.983392] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:53.401 [2024-11-15 11:42:53.983531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.401 [2024-11-15 11:42:53.983551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.401 [2024-11-15 11:42:53.983563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.401 [2024-11-15 11:42:53.983579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.401 [2024-11-15 11:42:53.983594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.401 [2024-11-15 11:42:53.983604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.401 [2024-11-15 11:42:53.983615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
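While the reconnect retries above and below churn against the removed 4420 listener, the path set the test is waiting on can be inspected the same way the get_subsystem_paths helper in the xtrace does; a rough equivalent, assuming the same /tmp/host.sock RPC socket used by the host application:

  # List the transport service IDs (ports) of every path on controller nvme0.
  # Once the discovery poller drops the removed listener, this settles on "4421".
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs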
00:24:53.401 [2024-11-15 11:42:53.983624] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:53.401 [2024-11-15 11:42:53.983631] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.401 [2024-11-15 11:42:53.983641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:53.401 [2024-11-15 11:42:53.993425] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.401 [2024-11-15 11:42:53.993441] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.401 [2024-11-15 11:42:53.993447] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:53.401 [2024-11-15 11:42:53.993453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.401 [2024-11-15 11:42:53.993477] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:53.401 [2024-11-15 11:42:53.993662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.401 [2024-11-15 11:42:53.993680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.401 [2024-11-15 11:42:53.993690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.401 [2024-11-15 11:42:53.993706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.401 [2024-11-15 11:42:53.993721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.401 [2024-11-15 11:42:53.993730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.401 [2024-11-15 11:42:53.993739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:53.401 [2024-11-15 11:42:53.993748] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:53.401 [2024-11-15 11:42:53.993754] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.401 [2024-11-15 11:42:53.993760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.401 11:42:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.401 [2024-11-15 11:42:54.004198] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.401 [2024-11-15 11:42:54.004217] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.401 [2024-11-15 11:42:54.004223] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:53.401 [2024-11-15 11:42:54.004236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.401 [2024-11-15 11:42:54.004257] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:53.401 [2024-11-15 11:42:54.004399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.401 [2024-11-15 11:42:54.004417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.401 [2024-11-15 11:42:54.004429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.401 [2024-11-15 11:42:54.004444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.401 [2024-11-15 11:42:54.004467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.401 [2024-11-15 11:42:54.004477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.401 [2024-11-15 11:42:54.004488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:53.401 [2024-11-15 11:42:54.004496] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:53.401 [2024-11-15 11:42:54.004503] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.402 [2024-11-15 11:42:54.004509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:53.402 [2024-11-15 11:42:54.014290] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.402 [2024-11-15 11:42:54.014311] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.402 [2024-11-15 11:42:54.014318] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:53.402 [2024-11-15 11:42:54.014323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.402 [2024-11-15 11:42:54.014345] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:53.402 [2024-11-15 11:42:54.014483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.402 [2024-11-15 11:42:54.014502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.402 [2024-11-15 11:42:54.014513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.402 [2024-11-15 11:42:54.014528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.402 [2024-11-15 11:42:54.014542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.402 [2024-11-15 11:42:54.014551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.402 [2024-11-15 11:42:54.014561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:53.402 [2024-11-15 11:42:54.014570] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:53.402 [2024-11-15 11:42:54.014576] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.402 [2024-11-15 11:42:54.014582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.402 [2024-11-15 11:42:54.024380] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.402 [2024-11-15 11:42:54.024400] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.402 [2024-11-15 11:42:54.024406] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:24:53.402 [2024-11-15 11:42:54.024412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.402 [2024-11-15 11:42:54.024431] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:53.402 [2024-11-15 11:42:54.024551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.402 [2024-11-15 11:42:54.024568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.402 [2024-11-15 11:42:54.024579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.402 [2024-11-15 11:42:54.024594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.402 [2024-11-15 11:42:54.024608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.402 [2024-11-15 11:42:54.024617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.402 [2024-11-15 11:42:54.024628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:53.402 [2024-11-15 11:42:54.024638] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:53.402 [2024-11-15 11:42:54.024647] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.402 [2024-11-15 11:42:54.024653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:53.402 [2024-11-15 11:42:54.034469] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.402 [2024-11-15 11:42:54.034485] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.402 [2024-11-15 11:42:54.034492] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:53.402 [2024-11-15 11:42:54.034498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.402 [2024-11-15 11:42:54.034517] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:53.402 [2024-11-15 11:42:54.034622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.402 [2024-11-15 11:42:54.034637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.402 [2024-11-15 11:42:54.034647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.402 [2024-11-15 11:42:54.034662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.402 [2024-11-15 11:42:54.034676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.402 [2024-11-15 11:42:54.034684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.402 [2024-11-15 11:42:54.034694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:53.402 [2024-11-15 11:42:54.034703] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:53.402 [2024-11-15 11:42:54.034709] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.402 [2024-11-15 11:42:54.034715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:53.402 [2024-11-15 11:42:54.044552] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:53.402 [2024-11-15 11:42:54.044571] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:53.402 [2024-11-15 11:42:54.044578] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:53.402 [2024-11-15 11:42:54.044585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:53.402 [2024-11-15 11:42:54.044604] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:53.402 [2024-11-15 11:42:54.044727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.402 [2024-11-15 11:42:54.044744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46890 with addr=10.0.0.2, port=4420 00:24:53.402 [2024-11-15 11:42:54.044754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c46890 is same with the state(6) to be set 00:24:53.402 [2024-11-15 11:42:54.044770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c46890 (9): Bad file descriptor 00:24:53.402 [2024-11-15 11:42:54.044785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:53.402 [2024-11-15 11:42:54.044795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:53.402 [2024-11-15 11:42:54.044806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:53.402 [2024-11-15 11:42:54.044815] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
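All of the polling in this test goes through the waitforcondition / get_notification_count pattern visible in the xtrace; the real helpers live in common/autotest_common.sh and host/discovery.sh and may differ in detail, so the following bash is only a reconstruction from the trace:

  # Poll an arbitrary shell condition for up to roughly 10 seconds (suggested by
  # "local max=10" and "sleep 1" in the trace); a non-zero return means timeout.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  # Count notifications newer than the last seen id; the trace shows notify_id
  # advancing by the returned count (0 -> 1 -> 2 across the checks above).
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # Usage, mirroring is_notification_count_eq in the trace (rpc_cmd is the
  # autotest wrapper around scripts/rpc.py; notify_id starts at 0 at
  # host/discovery.sh@72 above and is carried between checks):
  expected_count=2
  waitforcondition 'get_notification_count && ((notification_count == expected_count))'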
00:24:53.402 [2024-11-15 11:42:54.044821] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:53.402 [2024-11-15 11:42:54.044827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.402 [2024-11-15 11:42:54.050369] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:53.402 [2024-11-15 11:42:54.050391] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.402 11:42:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.402 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.403 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.660 11:42:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.589 [2024-11-15 11:42:55.323910] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:54.589 [2024-11-15 11:42:55.323932] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:54.589 [2024-11-15 11:42:55.323948] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:54.589 [2024-11-15 11:42:55.412238] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:54.845 [2024-11-15 11:42:55.476974] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:54.845 [2024-11-15 11:42:55.477784] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1c44210:1 started. 00:24:54.845 [2024-11-15 11:42:55.480037] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:54.845 [2024-11-15 11:42:55.480072] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:54.845 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:54.846 [2024-11-15 11:42:55.483209] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1c44210 was disconnected and freed. delete nvme_qpair. 
00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.846 request: 00:24:54.846 { 00:24:54.846 "name": "nvme", 00:24:54.846 "trtype": "tcp", 00:24:54.846 "traddr": "10.0.0.2", 00:24:54.846 "adrfam": "ipv4", 00:24:54.846 "trsvcid": "8009", 00:24:54.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:54.846 "wait_for_attach": true, 00:24:54.846 "method": "bdev_nvme_start_discovery", 00:24:54.846 "req_id": 1 00:24:54.846 } 00:24:54.846 Got JSON-RPC error response 00:24:54.846 response: 00:24:54.846 { 00:24:54.846 "code": -17, 00:24:54.846 "message": "File exists" 00:24:54.846 } 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.846 request: 00:24:54.846 { 00:24:54.846 "name": "nvme_second", 00:24:54.846 "trtype": "tcp", 00:24:54.846 "traddr": "10.0.0.2", 00:24:54.846 "adrfam": "ipv4", 00:24:54.846 "trsvcid": "8009", 00:24:54.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:54.846 "wait_for_attach": true, 00:24:54.846 "method": "bdev_nvme_start_discovery", 00:24:54.846 "req_id": 1 00:24:54.846 } 00:24:54.846 Got JSON-RPC error response 00:24:54.846 response: 00:24:54.846 { 00:24:54.846 "code": -17, 00:24:54.846 "message": "File exists" 00:24:54.846 } 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.846 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.103 11:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.031 [2024-11-15 11:42:56.731632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.031 [2024-11-15 11:42:56.731668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46510 with addr=10.0.0.2, port=8010 00:24:56.031 [2024-11-15 11:42:56.731686] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:56.031 [2024-11-15 11:42:56.731695] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:56.031 [2024-11-15 11:42:56.731705] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:56.959 [2024-11-15 11:42:57.734003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.959 [2024-11-15 11:42:57.734035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c46510 with addr=10.0.0.2, port=8010 00:24:56.959 [2024-11-15 11:42:57.734051] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:56.959 [2024-11-15 11:42:57.734060] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:56.959 [2024-11-15 11:42:57.734069] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:57.887 [2024-11-15 11:42:58.736198] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:57.887 request: 00:24:57.887 { 00:24:57.887 "name": "nvme_second", 00:24:57.887 "trtype": "tcp", 00:24:57.887 "traddr": "10.0.0.2", 00:24:57.887 "adrfam": "ipv4", 00:24:57.887 "trsvcid": "8010", 00:24:57.887 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:57.887 "wait_for_attach": false, 00:24:57.887 "attach_timeout_ms": 3000, 00:24:58.145 "method": "bdev_nvme_start_discovery", 00:24:58.145 "req_id": 1 00:24:58.145 } 00:24:58.145 Got JSON-RPC error response 00:24:58.145 response: 00:24:58.145 { 00:24:58.145 "code": -110, 00:24:58.145 "message": "Connection timed out" 00:24:58.145 } 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1348158 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.145 rmmod nvme_tcp 00:24:58.145 rmmod nvme_fabrics 00:24:58.145 rmmod nvme_keyring 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1348136 ']' 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1348136 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 1348136 ']' 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 1348136 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1348136 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1348136' 00:24:58.145 killing process with pid 1348136 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 1348136 00:24:58.145 11:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 1348136 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.403 11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.403 
11:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.301 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.301 00:25:00.301 real 0m16.924s 00:25:00.301 user 0m20.449s 00:25:00.301 sys 0m5.733s 00:25:00.301 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:00.301 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.301 ************************************ 00:25:00.301 END TEST nvmf_host_discovery 00:25:00.301 ************************************ 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.558 ************************************ 00:25:00.558 START TEST nvmf_host_multipath_status 00:25:00.558 ************************************ 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:00.558 * Looking for test storage... 00:25:00.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.558 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:00.559 11:43:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:00.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.559 --rc genhtml_branch_coverage=1 00:25:00.559 --rc genhtml_function_coverage=1 00:25:00.559 --rc genhtml_legend=1 00:25:00.559 --rc geninfo_all_blocks=1 00:25:00.559 --rc geninfo_unexecuted_blocks=1 00:25:00.559 00:25:00.559 ' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:00.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.559 --rc genhtml_branch_coverage=1 00:25:00.559 --rc genhtml_function_coverage=1 00:25:00.559 --rc genhtml_legend=1 00:25:00.559 --rc geninfo_all_blocks=1 00:25:00.559 --rc geninfo_unexecuted_blocks=1 00:25:00.559 00:25:00.559 ' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:00.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.559 --rc genhtml_branch_coverage=1 00:25:00.559 --rc genhtml_function_coverage=1 00:25:00.559 --rc genhtml_legend=1 00:25:00.559 --rc geninfo_all_blocks=1 00:25:00.559 --rc geninfo_unexecuted_blocks=1 00:25:00.559 00:25:00.559 ' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:00.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.559 --rc genhtml_branch_coverage=1 00:25:00.559 --rc genhtml_function_coverage=1 00:25:00.559 --rc 
genhtml_legend=1 00:25:00.559 --rc geninfo_all_blocks=1 00:25:00.559 --rc geninfo_unexecuted_blocks=1 00:25:00.559 00:25:00.559 ' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:00.559 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:25:00.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.816 11:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.083 11:43:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.083 
11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:06.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.083 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:06.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:06.084 Found net devices under 0000:af:00.0: cvl_0_0 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:06.084 Found net devices under 0000:af:00.1: cvl_0_1 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:25:06.084 00:25:06.084 --- 10.0.0.2 ping statistics --- 00:25:06.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.084 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:06.084 00:25:06.084 --- 10.0.0.1 ping statistics --- 00:25:06.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.084 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1353373 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1353373 
00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1353373 ']' 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:06.084 11:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:06.084 [2024-11-15 11:43:06.756058] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:25:06.084 [2024-11-15 11:43:06.756114] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.084 [2024-11-15 11:43:06.858096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:06.084 [2024-11-15 11:43:06.907186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.084 [2024-11-15 11:43:06.907227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.084 [2024-11-15 11:43:06.907238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.084 [2024-11-15 11:43:06.907246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.084 [2024-11-15 11:43:06.907255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
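nvmfappstart in the trace above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers; only after that are the transport, bdev and subsystem created. A rough sketch of that start-and-wait step, using the flags from this run (paths shortened relative to the SPDK checkout; the simple poll loop below is illustrative, the real waitforlisten in autotest_common.sh has more retry and error handling):

    NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
    # -i 0: shared-memory instance id, -e 0xFFFF: tracepoint group mask, -m 0x3: run on cores 0-1
    $NS_EXEC ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # wait until the target's RPC socket responds before issuing any rpc.py commands
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done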
00:25:06.084 [2024-11-15 11:43:06.908745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.084 [2024-11-15 11:43:06.908753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1353373 00:25:06.342 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:06.599 [2024-11-15 11:43:07.321934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.599 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:06.856 Malloc0 00:25:06.856 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:07.113 11:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.370 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.626 [2024-11-15 11:43:08.427403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.626 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:07.883 [2024-11-15 11:43:08.692239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1353669 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1353669 
/var/tmp/bdevperf.sock 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1353669 ']' 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:07.883 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:08.140 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.140 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:08.140 11:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:08.702 11:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:08.959 Nvme0n1 00:25:08.959 11:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:09.523 Nvme0n1 00:25:09.523 11:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:09.523 11:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:11.419 11:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:11.419 11:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:11.677 11:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:11.934 11:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:12.865 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:12.865 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:12.865 11:43:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.865 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.122 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.122 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.122 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.122 11:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.379 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.379 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.379 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.379 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.636 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.636 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.636 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.636 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.893 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.893 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.893 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.893 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.149 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.149 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:14.149 11:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.149 11:43:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.406 11:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.406 11:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:14.406 11:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:14.663 11:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:14.920 11:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:16.290 11:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:16.290 11:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:16.290 11:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.290 11:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.290 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.290 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:16.290 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.290 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.547 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.547 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.547 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.547 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.803 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.803 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.804 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.804 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.060 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.060 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:17.060 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.060 11:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.317 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.317 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:17.317 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.317 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.573 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.573 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:17.573 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:17.830 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:18.086 11:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:19.456 11:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:19.456 11:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:19.456 11:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.456 11:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.456 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.456 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:19.456 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.456 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.713 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.713 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.713 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.713 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.970 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.970 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.970 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.970 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.227 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.227 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:20.227 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.227 11:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:20.483 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.483 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:20.483 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.483 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:20.741 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.741 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:20.741 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:25:20.998 11:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:21.255 11:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:22.185 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:22.185 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.441 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.441 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.441 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.441 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:22.441 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.441 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.697 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.697 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.697 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.697 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:22.953 11:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.210 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.210 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:23.210 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.210 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.467 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.467 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:23.467 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:23.724 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:23.989 11:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:24.923 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:24.923 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:24.923 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.923 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.180 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.180 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:25.180 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.180 11:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.437 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.437 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.437 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.437 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.693 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.693 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.693 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.693 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.951 11:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.514 11:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.514 11:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:26.514 11:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:26.514 11:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.078 11:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:28.008 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:28.008 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:28.008 11:43:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.008 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.266 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.266 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:28.266 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.266 11:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.266 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.266 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.266 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.266 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.523 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.523 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.523 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.523 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:28.780 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.780 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:29.037 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.037 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.294 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.294 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.295 11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.295 
11:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:29.551 11:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.551 11:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:29.808 11:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:29.808 11:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:30.065 11:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:30.322 11:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:31.253 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:31.253 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.253 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.253 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.510 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.510 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:31.510 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.510 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.766 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.766 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:31.766 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.767 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.023 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.023 11:43:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.023 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.023 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.281 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.281 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.281 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.281 11:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.538 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.538 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.538 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.538 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.795 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.795 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:32.795 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.052 11:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.309 11:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.679 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.936 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.936 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.936 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.936 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.193 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.193 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.193 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.193 11:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.450 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.450 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.450 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.450 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.707 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.707 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.707 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.707 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.964 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.964 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:35.964 
11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:36.221 11:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:36.478 11:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:37.409 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:37.409 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.409 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.409 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.666 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.666 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.666 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.666 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.230 11:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.488 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.745 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.745 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:38.745 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.002 11:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:39.259 11:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:40.189 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:40.189 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.189 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.189 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.446 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.446 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.446 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.446 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.704 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.962 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.962 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.962 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.962 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.219 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.219 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:41.219 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.219 11:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1353669 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1353669 ']' 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1353669 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1353669 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1353669' 00:25:41.476 killing process with pid 1353669 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1353669 00:25:41.476 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1353669 00:25:41.476 { 00:25:41.476 "results": [ 00:25:41.476 { 00:25:41.476 "job": "Nvme0n1", 00:25:41.476 "core_mask": "0x4", 00:25:41.476 "workload": "verify", 00:25:41.476 "status": "terminated", 00:25:41.476 "verify_range": { 00:25:41.476 "start": 0, 00:25:41.476 "length": 16384 00:25:41.476 }, 00:25:41.476 "queue_depth": 128, 00:25:41.476 "io_size": 4096, 00:25:41.476 "runtime": 31.855263, 00:25:41.476 "iops": 8896.740234101975, 00:25:41.476 "mibps": 34.75289153946084, 00:25:41.476 "io_failed": 0, 00:25:41.476 "io_timeout": 0, 00:25:41.476 "avg_latency_us": 14369.579867277758, 00:25:41.476 "min_latency_us": 114.96727272727273, 00:25:41.476 "max_latency_us": 4087539.898181818 00:25:41.476 } 00:25:41.476 ], 00:25:41.476 "core_count": 1 00:25:41.476 } 00:25:41.748 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1353669 00:25:41.748 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:41.748 [2024-11-15 11:43:08.757714] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:25:41.748 [2024-11-15 11:43:08.757779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353669 ] 00:25:41.748 [2024-11-15 11:43:08.824157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.748 [2024-11-15 11:43:08.862157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.748 Running I/O for 90 seconds... 
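Note: the trace above drives two helpers from host/multipath_status.sh. port_status queries bdevperf over its RPC socket with bdev_nvme_get_io_paths and filters the JSON with jq, comparing one attribute (current/connected/accessible) of the io_path for a given trsvcid against an expected value; set_ANA_state reprograms the listeners with nvmf_subsystem_listener_set_ana_state before the one-second settle sleep. A minimal sketch reconstructed from the trace (not the literal script body); the RPC socket, NQN, address and ports are the ones shown in the commands above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Compare one attribute of the io_path for the given trsvcid against an expected value.
port_status() {   # usage: port_status <trsvcid> <attribute> <expected>
    local port=$1 attr=$2 expected=$3
    [[ $("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
}

# Mirror "set_ANA_state non_optimized inaccessible", then re-check after the settle sleep.
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1
port_status 4420 current true && port_status 4421 accessible false && echo 'ANA state change visible to the host'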
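Note: the figures in the bdevperf result block above are internally consistent, and "status": "terminated" reflects the kill issued by killprocess rather than the verify job reaching its 90-second runtime. A quick check with the values copied from the JSON: 8896.740234 IOPS * 4096-byte I/Os = 36,441,048 bytes/s, and 36,441,048 / 1,048,576 = 34.75 MiB/s, matching the reported "mibps"; over the 31.855263 s the job actually ran that is roughly 283,000 I/Os, all completed with "io_failed": 0.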
00:25:41.748 7808.00 IOPS, 30.50 MiB/s [2024-11-15T10:43:42.601Z] 7887.50 IOPS, 30.81 MiB/s [2024-11-15T10:43:42.601Z] 7903.00 IOPS, 30.87 MiB/s [2024-11-15T10:43:42.601Z] 7879.75 IOPS, 30.78 MiB/s [2024-11-15T10:43:42.601Z] 7889.20 IOPS, 30.82 MiB/s [2024-11-15T10:43:42.601Z] 8429.17 IOPS, 32.93 MiB/s [2024-11-15T10:43:42.601Z] 8983.14 IOPS, 35.09 MiB/s [2024-11-15T10:43:42.601Z] 9377.00 IOPS, 36.63 MiB/s [2024-11-15T10:43:42.601Z] 9526.56 IOPS, 37.21 MiB/s [2024-11-15T10:43:42.601Z] 9368.00 IOPS, 36.59 MiB/s [2024-11-15T10:43:42.601Z] 9227.27 IOPS, 36.04 MiB/s [2024-11-15T10:43:42.601Z] 9110.33 IOPS, 35.59 MiB/s [2024-11-15T10:43:42.601Z] 9019.85 IOPS, 35.23 MiB/s [2024-11-15T10:43:42.601Z] 8934.79 IOPS, 34.90 MiB/s [2024-11-15T10:43:42.601Z] [2024-11-15 11:43:24.348772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.748 [2024-11-15 11:43:24.348807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.348982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.348993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.349000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.349011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.349017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.349029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.349035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.349046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.349053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.349064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.349070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.349081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.748 [2024-11-15 11:43:24.349087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.748 [2024-11-15 11:43:24.349098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 
11:43:24.349123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.749 [2024-11-15 11:43:24.349231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88504 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.349346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.349352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350631] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.749 [2024-11-15 11:43:24.350900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.749 [2024-11-15 11:43:24.350906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.350917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.350923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.350934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.350941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 
m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.350952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.350958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.350971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.350977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.350989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.350995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.750 [2024-11-15 11:43:24.351482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.750 [2024-11-15 11:43:24.351581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.750 [2024-11-15 11:43:24.351587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.351599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.351605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.351616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.351623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.351990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.352301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:25:41.751 [2024-11-15 11:43:24.352383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.352745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.352758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.353444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.353452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.353469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.751 [2024-11-15 11:43:24.353476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.353490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.353496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.353507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.751 [2024-11-15 11:43:24.353514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.751 [2024-11-15 11:43:24.353525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:41.752 [2024-11-15 11:43:24.353900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.353988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.353999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.752 [2024-11-15 11:43:24.354006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.354017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.354023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.354034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.354040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.354051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.354057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.354069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.354075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.354088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.354094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.354105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.354112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.355979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.355992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.356005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.356011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.356023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.356029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.356042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.356049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.356060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.356066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.752 [2024-11-15 11:43:24.356077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.752 [2024-11-15 11:43:24.356084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:25:41.753 [2024-11-15 11:43:24.356447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356810] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.753 [2024-11-15 11:43:24.356935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.753 [2024-11-15 11:43:24.356946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.356953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.366802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.366808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.754 [2024-11-15 11:43:24.367564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.754 [2024-11-15 11:43:24.367581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.754 [2024-11-15 11:43:24.367593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:25:41.755 [2024-11-15 11:43:24.367648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.755 [2024-11-15 11:43:24.367869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.367987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.367993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.755 [2024-11-15 11:43:24.368279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.755 [2024-11-15 11:43:24.368290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.368668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.368674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:25:41.756 [2024-11-15 11:43:24.369267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.756 [2024-11-15 11:43:24.369542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.756 [2024-11-15 11:43:24.369550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 
[2024-11-15 11:43:24.369799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88976 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.369992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.369998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.757 [2024-11-15 11:43:24.370527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.757 [2024-11-15 11:43:24.370534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.370662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370709] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.370809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.370820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 
p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.758 [2024-11-15 11:43:24.376984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.376995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.377001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.377013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.377019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.377030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.377037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.377048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.377054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.377067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.377073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.377085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.758 [2024-11-15 11:43:24.377092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.758 [2024-11-15 11:43:24.377103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.759 [2024-11-15 11:43:24.377391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:75 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.759 [2024-11-15 11:43:24.377784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.759 [2024-11-15 11:43:24.377796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:25:41.760 [2024-11-15 11:43:24.377813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.377991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.377997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378157] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.760 [2024-11-15 11:43:24.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.760 [2024-11-15 11:43:24.378491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.761 [2024-11-15 11:43:24.378498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.761 [2024-11-15 11:43:24.378509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:41.761-00:25:41.766 [2024-11-15 11:43:24.378-11:43:24.388] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE and READ commands (sqid:1, nsid:1, lba 88224-89240, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK TRANSPORT), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:25:41.766 [2024-11-15 11:43:24.388820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.388826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.388837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.388843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.388854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.388861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.388872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.388879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.766 [2024-11-15 11:43:24.389258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.766 [2024-11-15 11:43:24.389264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.767 [2024-11-15 11:43:24.389497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.389860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.389866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.390246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.390258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.390270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.390277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.390288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.390294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.390305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.767 [2024-11-15 11:43:24.390314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.767 [2024-11-15 11:43:24.390325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:41.768 [2024-11-15 11:43:24.390393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.390474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.390485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.395851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.395872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.395892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.395911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.395928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.395945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.395963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.395980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.395991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.395997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.396014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.396031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.768 [2024-11-15 11:43:24.396470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.768 [2024-11-15 11:43:24.396522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.768 [2024-11-15 11:43:24.396534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.768 [2024-11-15 11:43:24.396540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.769 [2024-11-15 11:43:24.396926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.396943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.396962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.396973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.396979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:25:41.769 [2024-11-15 11:43:24.396990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.396996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.769 [2024-11-15 11:43:24.397204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.769 [2024-11-15 11:43:24.397215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.770 [2024-11-15 11:43:24.397507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88816 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.770 [2024-11-15 11:43:24.397797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.770 [2024-11-15 11:43:24.397804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.397984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.397995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:41.771 [2024-11-15 11:43:24.398031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.398985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.398994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.399011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.399029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.399046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.399064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.771 [2024-11-15 11:43:24.399081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:41.771 [2024-11-15 11:43:24.399308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.771 [2024-11-15 11:43:24.399318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.772 [2024-11-15 11:43:24.399403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.399609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.399614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.400348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.772 [2024-11-15 11:43:24.400368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:25:41.772 [2024-11-15 11:43:24.400738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:41.772 [2024-11-15 11:43:24.400819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.772 [2024-11-15 11:43:24.400825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.773 [2024-11-15 11:43:24.400845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.773 [2024-11-15 11:43:24.400865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.400885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.400905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.400925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.400945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.400964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.400984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.402992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.402998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 
[2024-11-15 11:43:24.403104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.773 [2024-11-15 11:43:24.403335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:41.773 [2024-11-15 11:43:24.403352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88736 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403790] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.403984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.403990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 
m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:24.404006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:24.404012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:41.774 8381.07 IOPS, 32.74 MiB/s [2024-11-15T10:43:42.627Z] 7857.25 IOPS, 30.69 MiB/s [2024-11-15T10:43:42.627Z] 7395.06 IOPS, 28.89 MiB/s [2024-11-15T10:43:42.627Z] 6984.22 IOPS, 27.28 MiB/s [2024-11-15T10:43:42.627Z] 7169.95 IOPS, 28.01 MiB/s [2024-11-15T10:43:42.627Z] 7410.00 IOPS, 28.95 MiB/s [2024-11-15T10:43:42.627Z] 7633.00 IOPS, 29.82 MiB/s [2024-11-15T10:43:42.627Z] 7845.82 IOPS, 30.65 MiB/s [2024-11-15T10:43:42.627Z] 8041.17 IOPS, 31.41 MiB/s [2024-11-15T10:43:42.627Z] 8212.08 IOPS, 32.08 MiB/s [2024-11-15T10:43:42.627Z] 8373.64 IOPS, 32.71 MiB/s [2024-11-15T10:43:42.627Z] 8509.04 IOPS, 33.24 MiB/s [2024-11-15T10:43:42.627Z] 8648.11 IOPS, 33.78 MiB/s [2024-11-15T10:43:42.627Z] 8777.68 IOPS, 34.29 MiB/s [2024-11-15T10:43:42.627Z] 8900.90 IOPS, 34.77 MiB/s [2024-11-15T10:43:42.627Z] [2024-11-15 11:43:39.974227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:39.974265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:39.974313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:39.974320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:41.774 [2024-11-15 11:43:39.974333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.774 [2024-11-15 11:43:39.974340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:41.775 [2024-11-15 11:43:39.974438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.775 [2024-11-15 11:43:39.974447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:41.775 [2024-11-15 11:43:39.974464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.775 [2024-11-15 11:43:39.974476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.775 [2024-11-15 11:43:39.974487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.775 [2024-11-15 11:43:39.974494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:41.775 [2024-11-15 11:43:39.974505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.775 [2024-11-15 11:43:39.974511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:41.775 8960.53 IOPS, 35.00 MiB/s [2024-11-15T10:43:42.628Z] 8927.42 IOPS, 34.87 MiB/s [2024-11-15T10:43:42.628Z] Received shutdown signal, test time was about 31.855848 seconds 00:25:41.775 00:25:41.775 Latency(us) 00:25:41.775 [2024-11-15T10:43:42.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.775 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:41.775 Verification LBA range: start 0x0 length 0x4000 00:25:41.775 Nvme0n1 : 31.86 8896.74 34.75 0.00 0.00 14369.58 114.97 4087539.90 00:25:41.775 [2024-11-15T10:43:42.628Z] =================================================================================================================== 00:25:41.775 [2024-11-15T10:43:42.628Z] Total : 8896.74 34.75 0.00 0.00 14369.58 114.97 4087539.90 00:25:41.775 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.032 rmmod nvme_tcp 00:25:42.032 rmmod nvme_fabrics 00:25:42.032 rmmod nvme_keyring 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1353373 ']' 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1353373 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1353373 ']' 00:25:42.032 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1353373 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1353373 00:25:42.033 
11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1353373' 00:25:42.033 killing process with pid 1353373 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1353373 00:25:42.033 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1353373 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.290 11:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.190 00:25:44.190 real 0m43.798s 00:25:44.190 user 2m3.703s 00:25:44.190 sys 0m12.034s 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:44.190 ************************************ 00:25:44.190 END TEST nvmf_host_multipath_status 00:25:44.190 ************************************ 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:44.190 11:43:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.448 ************************************ 00:25:44.448 START TEST nvmf_discovery_remove_ifc 00:25:44.448 ************************************ 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
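
A note on the completion flood in the multipath_status output above: SPDK prints NVMe statuses as (SCT/SC), so ASYMMETRIC ACCESS INACCESSIBLE (03/02) is Status Code Type 3h (path related) with Status Code 02h. Each of those WRITEs landed on a path whose ANA group was inaccessible at that moment, which is the condition this test exercises while the verify workload keeps running, and the IOPS samples dipping and then recovering around the bursts show the multipath layer steering I/O back onto a usable path. If the console output is kept in a file, a rough count of how many commands hit that state is a one-liner; the file name below is only a placeholder, not something the test itself writes.

  # count ANA-inaccessible completions in a saved copy of this console output
  # (the log file name is an assumption; point it at wherever the run was captured)
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' multipath_status.console.log
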
00:25:44.448 * Looking for test storage... 00:25:44.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:44.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.448 --rc genhtml_branch_coverage=1 00:25:44.448 --rc genhtml_function_coverage=1 00:25:44.448 --rc genhtml_legend=1 00:25:44.448 --rc geninfo_all_blocks=1 00:25:44.448 --rc geninfo_unexecuted_blocks=1 00:25:44.448 00:25:44.448 ' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:44.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.448 --rc genhtml_branch_coverage=1 00:25:44.448 --rc genhtml_function_coverage=1 00:25:44.448 --rc genhtml_legend=1 00:25:44.448 --rc geninfo_all_blocks=1 00:25:44.448 --rc geninfo_unexecuted_blocks=1 00:25:44.448 00:25:44.448 ' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:44.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.448 --rc genhtml_branch_coverage=1 00:25:44.448 --rc genhtml_function_coverage=1 00:25:44.448 --rc genhtml_legend=1 00:25:44.448 --rc geninfo_all_blocks=1 00:25:44.448 --rc geninfo_unexecuted_blocks=1 00:25:44.448 00:25:44.448 ' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:44.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.448 --rc genhtml_branch_coverage=1 00:25:44.448 --rc genhtml_function_coverage=1 00:25:44.448 --rc genhtml_legend=1 00:25:44.448 --rc geninfo_all_blocks=1 00:25:44.448 --rc geninfo_unexecuted_blocks=1 00:25:44.448 00:25:44.448 ' 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.448 
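
The scripts/common.sh trace above is a dotted-version comparison: lt 1.15 2 splits both version strings on dots and dashes and compares them component by component to decide whether the installed lcov predates 2.x, which is what selects the LCOV_OPTS exported just before test/nvmf/common.sh is sourced. A stripped-down sketch of the same idea (numeric components only, not the exact helper from scripts/common.sh):

  # return 0 (true) when dotted version $1 sorts before $2, e.g. version_lt 1.15 2
  version_lt() {
      local -a a b
      local i x y
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1   # equal versions are not "less than"
  }
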
11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.448 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.449 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.705 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.705 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.705 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.705 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.705 11:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:49.960 11:43:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:49.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.960 11:43:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:49.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:49.960 Found net devices under 0000:af:00.0: cvl_0_0 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:49.960 Found net devices under 0000:af:00.1: cvl_0_1 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.960 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.961 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.219 
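
Condensed from the nvmf_tcp_init trace above: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the firewall is opened for the NVMe/TCP listener. The essential commands, with the interface names and addresses taken from this run, come down to roughly:

  # target NIC lives in its own namespace; the initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let the initiator reach the target's NVMe/TCP listener
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow are just the sanity check that both directions of that split work before the target application is started inside the namespace.
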
11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:25:50.219 00:25:50.219 --- 10.0.0.2 ping statistics --- 00:25:50.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.219 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:25:50.219 00:25:50.219 --- 10.0.0.1 ping statistics --- 00:25:50.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.219 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.219 11:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1363358 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1363358 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1363358 ']' 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:50.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:50.220 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.477 [2024-11-15 11:43:51.079832] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:25:50.477 [2024-11-15 11:43:51.079892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.477 [2024-11-15 11:43:51.151411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.477 [2024-11-15 11:43:51.190455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.477 [2024-11-15 11:43:51.190491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.477 [2024-11-15 11:43:51.190498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.477 [2024-11-15 11:43:51.190504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.477 [2024-11-15 11:43:51.190508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.477 [2024-11-15 11:43:51.191083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.407 [2024-11-15 11:43:52.015814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.407 [2024-11-15 11:43:52.024001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:51.407 null0 00:25:51.407 [2024-11-15 11:43:52.055980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1363632 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1363632 /tmp/host.sock 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1363632 ']' 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:51.407 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.407 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.407 [2024-11-15 11:43:52.134413] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:25:51.407 [2024-11-15 11:43:52.134480] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363632 ] 00:25:51.407 [2024-11-15 11:43:52.233048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.663 [2024-11-15 11:43:52.283766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.663 11:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.029 [2024-11-15 11:43:53.502629] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:53.029 [2024-11-15 11:43:53.502654] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:53.029 [2024-11-15 11:43:53.502673] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.029 [2024-11-15 11:43:53.588965] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:53.029 [2024-11-15 11:43:53.683777] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:53.029 [2024-11-15 11:43:53.684782] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x64d320:1 started. 00:25:53.029 [2024-11-15 11:43:53.686657] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:53.029 [2024-11-15 11:43:53.686708] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:53.029 [2024-11-15 11:43:53.686734] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:53.029 [2024-11-15 11:43:53.686752] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:53.029 [2024-11-15 11:43:53.686776] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.029 [2024-11-15 11:43:53.691569] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x64d320 was disconnected and freed. delete nvme_qpair. 
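
The host side of this test is driven over the second application's RPC socket (/tmp/host.sock); rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. Because that app was started with --wait-for-rpc, the sequence is: set the bdev_nvme options, start the framework, then start discovery against the target's port-8009 listener, which attaches every advertised subsystem as a controller and exposes its namespace as a bdev (nvme0 and nvme0n1 above). Replayed by hand, with the flags exactly as the trace shows them, it would look like:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /tmp/host.sock bdev_nvme_set_options -e 1
  $RPC -s /tmp/host.sock framework_start_init
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
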
00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.029 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.285 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:53.285 11:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.217 11:43:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.217 11:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.151 11:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.151 11:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.151 11:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.522 11:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.452 11:43:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.452 11:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.383 [2024-11-15 11:43:59.127892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:58.383 [2024-11-15 11:43:59.127942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.383 [2024-11-15 11:43:59.127958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.383 [2024-11-15 11:43:59.127971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.383 [2024-11-15 11:43:59.127982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.383 [2024-11-15 11:43:59.127994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.383 [2024-11-15 11:43:59.128004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.383 [2024-11-15 11:43:59.128016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.383 [2024-11-15 11:43:59.128026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.383 [2024-11-15 11:43:59.128037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.383 [2024-11-15 11:43:59.128048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.383 [2024-11-15 11:43:59.128059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629b50 is same with the state(6) to be set 00:25:58.383 [2024-11-15 11:43:59.137914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629b50 (9): Bad 
file descriptor 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.383 [2024-11-15 11:43:59.147957] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.383 [2024-11-15 11:43:59.147974] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.383 [2024-11-15 11:43:59.147981] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.383 [2024-11-15 11:43:59.147996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.383 [2024-11-15 11:43:59.148021] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.383 11:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.316 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.573 [2024-11-15 11:44:00.182492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:59.573 [2024-11-15 11:44:00.182570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x629b50 with addr=10.0.0.2, port=4420 00:25:59.573 [2024-11-15 11:44:00.182602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629b50 is same with the state(6) to be set 00:25:59.573 [2024-11-15 11:44:00.182659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629b50 (9): Bad file descriptor 00:25:59.573 [2024-11-15 11:44:00.183621] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:59.573 [2024-11-15 11:44:00.183687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:59.573 [2024-11-15 11:44:00.183710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:59.573 [2024-11-15 11:44:00.183732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:59.573 [2024-11-15 11:44:00.183753] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:59.573 [2024-11-15 11:44:00.183769] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
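The trace above is the test's polling loop: it repeatedly lists bdevs over the host-side RPC socket and sleeps one second until nvme0n1 disappears, while bdev_nvme keeps retrying the now-unreachable controller. A minimal standalone sketch of those helpers (assumption: SPDK's scripts/rpc.py is used directly; rpc_cmd in the log is the test suite's wrapper around it):

#!/usr/bin/env bash
# Reconstruction of the polling helpers traced above (discovery_remove_ifc.sh@29/@33/@34).
set -euo pipefail

RPC_SOCK=/tmp/host.sock      # the -s argument shown in the trace

get_bdev_list() {
    # Names only, sorted, collapsed onto one line (same jq | sort | xargs pipe).
    ./scripts/rpc.py -s "$RPC_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Loop until the whole list equals the expected value: '' while waiting for
    # nvme0n1 to disappear, 'nvme1n1' later while waiting for the re-attach.
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev ''    # block until every bdev created by the discovery service is gone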
00:25:59.573 [2024-11-15 11:44:00.183782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:59.573 [2024-11-15 11:44:00.183805] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:59.573 [2024-11-15 11:44:00.183819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.573 11:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.503 [2024-11-15 11:44:01.186338] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:00.503 [2024-11-15 11:44:01.186365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:00.503 [2024-11-15 11:44:01.186381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:00.503 [2024-11-15 11:44:01.186391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:00.503 [2024-11-15 11:44:01.186401] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:00.503 [2024-11-15 11:44:01.186427] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:00.503 [2024-11-15 11:44:01.186434] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:00.503 [2024-11-15 11:44:01.186440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
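While bdev_nvme cycles through the disconnect/reconnect attempts logged above, the controller state can also be inspected out-of-band over the same RPC socket. A small hedged helper, assuming the stock scripts/rpc.py and the nvme0 controller name this test uses:

# Dump the attached controller's state once per second while the reconnect loop spins.
watch_ctrlr_state() {
    local sock=${1:-/tmp/host.sock}
    while sleep 1; do
        ./scripts/rpc.py -s "$sock" bdev_nvme_get_controllers -n nvme0 || break
    done
}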
00:26:00.503 [2024-11-15 11:44:01.186473] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:00.503 [2024-11-15 11:44:01.186502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.503 [2024-11-15 11:44:01.186516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.503 [2024-11-15 11:44:01.186530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.503 [2024-11-15 11:44:01.186540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.503 [2024-11-15 11:44:01.186551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.503 [2024-11-15 11:44:01.186561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.503 [2024-11-15 11:44:01.186572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.503 [2024-11-15 11:44:01.186582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.503 [2024-11-15 11:44:01.186593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.503 [2024-11-15 11:44:01.186603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.503 [2024-11-15 11:44:01.186613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
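At this point the discovery controller has been failed out; the @82/@83 steps traced just below re-add the target address inside the namespace and bring the link back up, after which the test waits for the re-created bdev. Combined with the wait_for_bdev sketch above, the recovery path is essentially:

# Restore the target-side interface (mirrors discovery_remove_ifc.sh@82/@83 below),
# then wait for the rediscovered namespace to show up as nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1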
00:26:00.503 [2024-11-15 11:44:01.187320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618e50 (9): Bad file descriptor 00:26:00.503 [2024-11-15 11:44:01.188337] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:00.503 [2024-11-15 11:44:01.188352] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.503 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:00.760 11:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.690 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.690 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.690 11:44:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.690 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.690 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.690 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.691 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.691 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.691 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:01.691 11:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.621 [2024-11-15 11:44:03.237642] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:02.621 [2024-11-15 11:44:03.237664] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:02.621 [2024-11-15 11:44:03.237685] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:02.621 [2024-11-15 11:44:03.323963] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:02.621 [2024-11-15 11:44:03.418787] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:02.621 [2024-11-15 11:44:03.419526] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61ddc0:1 started. 00:26:02.621 [2024-11-15 11:44:03.420968] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:02.621 [2024-11-15 11:44:03.421008] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:02.621 [2024-11-15 11:44:03.421032] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:02.621 [2024-11-15 11:44:03.421050] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:02.621 [2024-11-15 11:44:03.421060] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:02.621 [2024-11-15 11:44:03.426530] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61ddc0 was disconnected and freed. delete nvme_qpair. 
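Once nvme1 is attached and the final bdev-list check passes, what remains is the killprocess/nvmftestfini teardown traced below. A hedged sketch of the manual equivalent; PIDs, module, namespace, and interface names are the ones reported in this log, and remove_spdk_ns is traced only by name, so deleting the namespace is the assumed equivalent:

# Stop the SPDK apps, then unwind the network/module state the way nvmftestfini does.
stop_pid() {
    local pid=$1
    kill "$pid" 2>/dev/null || return 0
    while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done
}

stop_pid 1363632                 # host-side SPDK app (killprocess 1363632 below)
stop_pid 1363358                 # nvmf target app   (killprocess 1363358 below)

sync
modprobe -v -r nvme-tcp          # produces the rmmod nvme_tcp/fabrics/keyring lines
modprobe -v -r nvme-fabrics

# Drop the SPDK-tagged iptables rules added for the 4420 listener.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target namespace (assumed body of remove_spdk_ns) and flush the
# initiator-side address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1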
00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.621 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1363632 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1363632 ']' 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1363632 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1363632 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1363632' 00:26:02.878 killing process with pid 1363632 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1363632 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1363632 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.878 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:03.136 rmmod nvme_tcp 00:26:03.136 rmmod nvme_fabrics 00:26:03.136 rmmod nvme_keyring 00:26:03.136 11:44:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1363358 ']' 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1363358 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1363358 ']' 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1363358 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1363358 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1363358' 00:26:03.136 killing process with pid 1363358 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1363358 00:26:03.136 11:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1363358 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.395 11:44:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.298 00:26:05.298 real 0m21.000s 00:26:05.298 user 0m25.693s 00:26:05.298 sys 0m5.727s 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.298 ************************************ 00:26:05.298 END TEST nvmf_discovery_remove_ifc 00:26:05.298 ************************************ 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.298 ************************************ 00:26:05.298 START TEST nvmf_identify_kernel_target 00:26:05.298 ************************************ 00:26:05.298 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:05.556 * Looking for test storage... 00:26:05.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.556 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.557 --rc genhtml_branch_coverage=1 00:26:05.557 --rc genhtml_function_coverage=1 00:26:05.557 --rc genhtml_legend=1 00:26:05.557 --rc geninfo_all_blocks=1 00:26:05.557 --rc geninfo_unexecuted_blocks=1 00:26:05.557 00:26:05.557 ' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.557 --rc genhtml_branch_coverage=1 00:26:05.557 --rc genhtml_function_coverage=1 00:26:05.557 --rc genhtml_legend=1 00:26:05.557 --rc geninfo_all_blocks=1 00:26:05.557 --rc geninfo_unexecuted_blocks=1 00:26:05.557 00:26:05.557 ' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.557 --rc genhtml_branch_coverage=1 00:26:05.557 --rc genhtml_function_coverage=1 00:26:05.557 --rc genhtml_legend=1 00:26:05.557 --rc geninfo_all_blocks=1 00:26:05.557 --rc geninfo_unexecuted_blocks=1 00:26:05.557 00:26:05.557 ' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.557 --rc genhtml_branch_coverage=1 00:26:05.557 --rc genhtml_function_coverage=1 00:26:05.557 --rc genhtml_legend=1 00:26:05.557 --rc geninfo_all_blocks=1 00:26:05.557 --rc geninfo_unexecuted_blocks=1 00:26:05.557 00:26:05.557 ' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:05.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.557 11:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.118 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.118 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.118 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.118 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.119 11:44:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:12.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:12.119 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:12.119 Found net devices under 0000:af:00.0: cvl_0_0 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:12.119 Found net devices under 0000:af:00.1: cvl_0_1 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.119 11:44:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.119 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.119 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.119 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.119 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:26:12.119 00:26:12.119 --- 10.0.0.2 ping statistics --- 00:26:12.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.120 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:26:12.120 00:26:12.120 --- 10.0.0.1 ping statistics --- 00:26:12.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.120 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.120 11:44:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:12.120 11:44:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:14.118 Waiting for block devices as requested 00:26:14.118 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:26:14.118 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:14.118 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:14.387 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:14.387 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:14.387 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:14.387 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:14.645 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:14.645 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:14.645 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:14.645 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:14.902 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:14.902 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:14.902 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:15.161 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:15.161 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:15.161 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
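The identify_kernel_nvmf test now builds an in-kernel nvmet target over configfs (the mkdir/echo/ln -s sequence traced just below) and points nvme discover at it. A consolidated sketch of that sequence; xtrace does not show the echo redirection targets, so the attribute paths here are the standard nvmet configfs names rather than values copied from this log, and /dev/nvme0n1 is the unused local NVMe disk the script selected in this run:

# Minimal kernel NVMe-oF/TCP target over configfs, mirroring configure_kernel_target.
modprobe nvmet                    # the log loads nvmet; nvmet-tcp must also be available

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

mkdir "$subsys" "$ns" "$port"

echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_serial"   # serial string (attribute name assumed)
echo 1             > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1  > "$ns/device_path"                            # unused local NVMe disk
echo 1             > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"

# Same discovery command the test runs below (host NQN/ID as generated in this run).
nvme discover -t tcp -a 10.0.0.1 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
    --hostid=00abaa28-3537-eb11-906e-0017a4403562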
00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:15.419 No valid GPT data, bailing 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:15.419 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:15.419 00:26:15.419 Discovery Log Number of Records 2, Generation counter 2 00:26:15.419 =====Discovery Log Entry 0====== 00:26:15.419 trtype: tcp 00:26:15.419 adrfam: ipv4 00:26:15.419 subtype: current discovery subsystem 00:26:15.419 treq: not specified, sq flow control disable supported 00:26:15.419 portid: 1 00:26:15.419 trsvcid: 4420 00:26:15.419 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:15.419 traddr: 10.0.0.1 00:26:15.419 eflags: none 00:26:15.419 sectype: none 00:26:15.419 =====Discovery Log Entry 1====== 00:26:15.419 trtype: tcp 00:26:15.419 adrfam: ipv4 00:26:15.419 subtype: nvme subsystem 00:26:15.419 treq: not specified, sq flow control disable 
supported 00:26:15.419 portid: 1 00:26:15.420 trsvcid: 4420 00:26:15.420 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:15.420 traddr: 10.0.0.1 00:26:15.420 eflags: none 00:26:15.420 sectype: none 00:26:15.420 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:15.420 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:15.678 ===================================================== 00:26:15.678 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:15.678 ===================================================== 00:26:15.678 Controller Capabilities/Features 00:26:15.678 ================================ 00:26:15.678 Vendor ID: 0000 00:26:15.678 Subsystem Vendor ID: 0000 00:26:15.678 Serial Number: 46e9ab77bb0e8b39a80a 00:26:15.678 Model Number: Linux 00:26:15.678 Firmware Version: 6.8.9-20 00:26:15.678 Recommended Arb Burst: 0 00:26:15.678 IEEE OUI Identifier: 00 00 00 00:26:15.678 Multi-path I/O 00:26:15.678 May have multiple subsystem ports: No 00:26:15.678 May have multiple controllers: No 00:26:15.678 Associated with SR-IOV VF: No 00:26:15.678 Max Data Transfer Size: Unlimited 00:26:15.678 Max Number of Namespaces: 0 00:26:15.678 Max Number of I/O Queues: 1024 00:26:15.678 NVMe Specification Version (VS): 1.3 00:26:15.678 NVMe Specification Version (Identify): 1.3 00:26:15.678 Maximum Queue Entries: 1024 00:26:15.678 Contiguous Queues Required: No 00:26:15.678 Arbitration Mechanisms Supported 00:26:15.678 Weighted Round Robin: Not Supported 00:26:15.678 Vendor Specific: Not Supported 00:26:15.678 Reset Timeout: 7500 ms 00:26:15.678 Doorbell Stride: 4 bytes 00:26:15.678 NVM Subsystem Reset: Not Supported 00:26:15.678 Command Sets Supported 00:26:15.678 NVM Command Set: Supported 00:26:15.678 Boot Partition: Not Supported 00:26:15.678 Memory Page Size Minimum: 4096 bytes 00:26:15.678 Memory Page Size Maximum: 4096 bytes 00:26:15.678 Persistent Memory Region: Not Supported 00:26:15.678 Optional Asynchronous Events Supported 00:26:15.678 Namespace Attribute Notices: Not Supported 00:26:15.678 Firmware Activation Notices: Not Supported 00:26:15.678 ANA Change Notices: Not Supported 00:26:15.678 PLE Aggregate Log Change Notices: Not Supported 00:26:15.678 LBA Status Info Alert Notices: Not Supported 00:26:15.678 EGE Aggregate Log Change Notices: Not Supported 00:26:15.678 Normal NVM Subsystem Shutdown event: Not Supported 00:26:15.678 Zone Descriptor Change Notices: Not Supported 00:26:15.678 Discovery Log Change Notices: Supported 00:26:15.678 Controller Attributes 00:26:15.678 128-bit Host Identifier: Not Supported 00:26:15.678 Non-Operational Permissive Mode: Not Supported 00:26:15.679 NVM Sets: Not Supported 00:26:15.679 Read Recovery Levels: Not Supported 00:26:15.679 Endurance Groups: Not Supported 00:26:15.679 Predictable Latency Mode: Not Supported 00:26:15.679 Traffic Based Keep ALive: Not Supported 00:26:15.679 Namespace Granularity: Not Supported 00:26:15.679 SQ Associations: Not Supported 00:26:15.679 UUID List: Not Supported 00:26:15.679 Multi-Domain Subsystem: Not Supported 00:26:15.679 Fixed Capacity Management: Not Supported 00:26:15.679 Variable Capacity Management: Not Supported 00:26:15.679 Delete Endurance Group: Not Supported 00:26:15.679 Delete NVM Set: Not Supported 00:26:15.679 Extended LBA Formats Supported: Not Supported 00:26:15.679 Flexible Data Placement 
Supported: Not Supported 00:26:15.679 00:26:15.679 Controller Memory Buffer Support 00:26:15.679 ================================ 00:26:15.679 Supported: No 00:26:15.679 00:26:15.679 Persistent Memory Region Support 00:26:15.679 ================================ 00:26:15.679 Supported: No 00:26:15.679 00:26:15.679 Admin Command Set Attributes 00:26:15.679 ============================ 00:26:15.679 Security Send/Receive: Not Supported 00:26:15.679 Format NVM: Not Supported 00:26:15.679 Firmware Activate/Download: Not Supported 00:26:15.679 Namespace Management: Not Supported 00:26:15.679 Device Self-Test: Not Supported 00:26:15.679 Directives: Not Supported 00:26:15.679 NVMe-MI: Not Supported 00:26:15.679 Virtualization Management: Not Supported 00:26:15.679 Doorbell Buffer Config: Not Supported 00:26:15.679 Get LBA Status Capability: Not Supported 00:26:15.679 Command & Feature Lockdown Capability: Not Supported 00:26:15.679 Abort Command Limit: 1 00:26:15.679 Async Event Request Limit: 1 00:26:15.679 Number of Firmware Slots: N/A 00:26:15.679 Firmware Slot 1 Read-Only: N/A 00:26:15.679 Firmware Activation Without Reset: N/A 00:26:15.679 Multiple Update Detection Support: N/A 00:26:15.679 Firmware Update Granularity: No Information Provided 00:26:15.679 Per-Namespace SMART Log: No 00:26:15.679 Asymmetric Namespace Access Log Page: Not Supported 00:26:15.679 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:15.679 Command Effects Log Page: Not Supported 00:26:15.679 Get Log Page Extended Data: Supported 00:26:15.679 Telemetry Log Pages: Not Supported 00:26:15.679 Persistent Event Log Pages: Not Supported 00:26:15.679 Supported Log Pages Log Page: May Support 00:26:15.679 Commands Supported & Effects Log Page: Not Supported 00:26:15.679 Feature Identifiers & Effects Log Page:May Support 00:26:15.679 NVMe-MI Commands & Effects Log Page: May Support 00:26:15.679 Data Area 4 for Telemetry Log: Not Supported 00:26:15.679 Error Log Page Entries Supported: 1 00:26:15.679 Keep Alive: Not Supported 00:26:15.679 00:26:15.679 NVM Command Set Attributes 00:26:15.679 ========================== 00:26:15.679 Submission Queue Entry Size 00:26:15.679 Max: 1 00:26:15.679 Min: 1 00:26:15.679 Completion Queue Entry Size 00:26:15.679 Max: 1 00:26:15.679 Min: 1 00:26:15.679 Number of Namespaces: 0 00:26:15.679 Compare Command: Not Supported 00:26:15.679 Write Uncorrectable Command: Not Supported 00:26:15.679 Dataset Management Command: Not Supported 00:26:15.679 Write Zeroes Command: Not Supported 00:26:15.679 Set Features Save Field: Not Supported 00:26:15.679 Reservations: Not Supported 00:26:15.679 Timestamp: Not Supported 00:26:15.679 Copy: Not Supported 00:26:15.679 Volatile Write Cache: Not Present 00:26:15.679 Atomic Write Unit (Normal): 1 00:26:15.679 Atomic Write Unit (PFail): 1 00:26:15.679 Atomic Compare & Write Unit: 1 00:26:15.679 Fused Compare & Write: Not Supported 00:26:15.679 Scatter-Gather List 00:26:15.679 SGL Command Set: Supported 00:26:15.679 SGL Keyed: Not Supported 00:26:15.679 SGL Bit Bucket Descriptor: Not Supported 00:26:15.679 SGL Metadata Pointer: Not Supported 00:26:15.679 Oversized SGL: Not Supported 00:26:15.679 SGL Metadata Address: Not Supported 00:26:15.679 SGL Offset: Supported 00:26:15.679 Transport SGL Data Block: Not Supported 00:26:15.679 Replay Protected Memory Block: Not Supported 00:26:15.679 00:26:15.679 Firmware Slot Information 00:26:15.679 ========================= 00:26:15.679 Active slot: 0 00:26:15.679 00:26:15.679 00:26:15.679 Error Log 00:26:15.679 
========= 00:26:15.679 00:26:15.679 Active Namespaces 00:26:15.679 ================= 00:26:15.679 Discovery Log Page 00:26:15.679 ================== 00:26:15.679 Generation Counter: 2 00:26:15.679 Number of Records: 2 00:26:15.679 Record Format: 0 00:26:15.679 00:26:15.679 Discovery Log Entry 0 00:26:15.679 ---------------------- 00:26:15.679 Transport Type: 3 (TCP) 00:26:15.679 Address Family: 1 (IPv4) 00:26:15.679 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:15.679 Entry Flags: 00:26:15.679 Duplicate Returned Information: 0 00:26:15.679 Explicit Persistent Connection Support for Discovery: 0 00:26:15.679 Transport Requirements: 00:26:15.679 Secure Channel: Not Specified 00:26:15.679 Port ID: 1 (0x0001) 00:26:15.679 Controller ID: 65535 (0xffff) 00:26:15.679 Admin Max SQ Size: 32 00:26:15.679 Transport Service Identifier: 4420 00:26:15.679 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:15.679 Transport Address: 10.0.0.1 00:26:15.679 Discovery Log Entry 1 00:26:15.679 ---------------------- 00:26:15.679 Transport Type: 3 (TCP) 00:26:15.679 Address Family: 1 (IPv4) 00:26:15.679 Subsystem Type: 2 (NVM Subsystem) 00:26:15.679 Entry Flags: 00:26:15.679 Duplicate Returned Information: 0 00:26:15.679 Explicit Persistent Connection Support for Discovery: 0 00:26:15.679 Transport Requirements: 00:26:15.679 Secure Channel: Not Specified 00:26:15.679 Port ID: 1 (0x0001) 00:26:15.679 Controller ID: 65535 (0xffff) 00:26:15.679 Admin Max SQ Size: 32 00:26:15.679 Transport Service Identifier: 4420 00:26:15.679 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:15.679 Transport Address: 10.0.0.1 00:26:15.679 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:15.679 get_feature(0x01) failed 00:26:15.679 get_feature(0x02) failed 00:26:15.679 get_feature(0x04) failed 00:26:15.679 ===================================================== 00:26:15.679 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:15.679 ===================================================== 00:26:15.679 Controller Capabilities/Features 00:26:15.679 ================================ 00:26:15.679 Vendor ID: 0000 00:26:15.679 Subsystem Vendor ID: 0000 00:26:15.679 Serial Number: 76ab61f3937bd345e7ca 00:26:15.679 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:15.679 Firmware Version: 6.8.9-20 00:26:15.679 Recommended Arb Burst: 6 00:26:15.679 IEEE OUI Identifier: 00 00 00 00:26:15.679 Multi-path I/O 00:26:15.679 May have multiple subsystem ports: Yes 00:26:15.679 May have multiple controllers: Yes 00:26:15.680 Associated with SR-IOV VF: No 00:26:15.680 Max Data Transfer Size: Unlimited 00:26:15.680 Max Number of Namespaces: 1024 00:26:15.680 Max Number of I/O Queues: 128 00:26:15.680 NVMe Specification Version (VS): 1.3 00:26:15.680 NVMe Specification Version (Identify): 1.3 00:26:15.680 Maximum Queue Entries: 1024 00:26:15.680 Contiguous Queues Required: No 00:26:15.680 Arbitration Mechanisms Supported 00:26:15.680 Weighted Round Robin: Not Supported 00:26:15.680 Vendor Specific: Not Supported 00:26:15.680 Reset Timeout: 7500 ms 00:26:15.680 Doorbell Stride: 4 bytes 00:26:15.680 NVM Subsystem Reset: Not Supported 00:26:15.680 Command Sets Supported 00:26:15.680 NVM Command Set: Supported 00:26:15.680 Boot Partition: Not Supported 00:26:15.680 
Memory Page Size Minimum: 4096 bytes 00:26:15.680 Memory Page Size Maximum: 4096 bytes 00:26:15.680 Persistent Memory Region: Not Supported 00:26:15.680 Optional Asynchronous Events Supported 00:26:15.680 Namespace Attribute Notices: Supported 00:26:15.680 Firmware Activation Notices: Not Supported 00:26:15.680 ANA Change Notices: Supported 00:26:15.680 PLE Aggregate Log Change Notices: Not Supported 00:26:15.680 LBA Status Info Alert Notices: Not Supported 00:26:15.680 EGE Aggregate Log Change Notices: Not Supported 00:26:15.680 Normal NVM Subsystem Shutdown event: Not Supported 00:26:15.680 Zone Descriptor Change Notices: Not Supported 00:26:15.680 Discovery Log Change Notices: Not Supported 00:26:15.680 Controller Attributes 00:26:15.680 128-bit Host Identifier: Supported 00:26:15.680 Non-Operational Permissive Mode: Not Supported 00:26:15.680 NVM Sets: Not Supported 00:26:15.680 Read Recovery Levels: Not Supported 00:26:15.680 Endurance Groups: Not Supported 00:26:15.680 Predictable Latency Mode: Not Supported 00:26:15.680 Traffic Based Keep ALive: Supported 00:26:15.680 Namespace Granularity: Not Supported 00:26:15.680 SQ Associations: Not Supported 00:26:15.680 UUID List: Not Supported 00:26:15.680 Multi-Domain Subsystem: Not Supported 00:26:15.680 Fixed Capacity Management: Not Supported 00:26:15.680 Variable Capacity Management: Not Supported 00:26:15.680 Delete Endurance Group: Not Supported 00:26:15.680 Delete NVM Set: Not Supported 00:26:15.680 Extended LBA Formats Supported: Not Supported 00:26:15.680 Flexible Data Placement Supported: Not Supported 00:26:15.680 00:26:15.680 Controller Memory Buffer Support 00:26:15.680 ================================ 00:26:15.680 Supported: No 00:26:15.680 00:26:15.680 Persistent Memory Region Support 00:26:15.680 ================================ 00:26:15.680 Supported: No 00:26:15.680 00:26:15.680 Admin Command Set Attributes 00:26:15.680 ============================ 00:26:15.680 Security Send/Receive: Not Supported 00:26:15.680 Format NVM: Not Supported 00:26:15.680 Firmware Activate/Download: Not Supported 00:26:15.680 Namespace Management: Not Supported 00:26:15.680 Device Self-Test: Not Supported 00:26:15.680 Directives: Not Supported 00:26:15.680 NVMe-MI: Not Supported 00:26:15.680 Virtualization Management: Not Supported 00:26:15.680 Doorbell Buffer Config: Not Supported 00:26:15.680 Get LBA Status Capability: Not Supported 00:26:15.680 Command & Feature Lockdown Capability: Not Supported 00:26:15.680 Abort Command Limit: 4 00:26:15.680 Async Event Request Limit: 4 00:26:15.680 Number of Firmware Slots: N/A 00:26:15.680 Firmware Slot 1 Read-Only: N/A 00:26:15.680 Firmware Activation Without Reset: N/A 00:26:15.680 Multiple Update Detection Support: N/A 00:26:15.680 Firmware Update Granularity: No Information Provided 00:26:15.680 Per-Namespace SMART Log: Yes 00:26:15.680 Asymmetric Namespace Access Log Page: Supported 00:26:15.680 ANA Transition Time : 10 sec 00:26:15.680 00:26:15.680 Asymmetric Namespace Access Capabilities 00:26:15.680 ANA Optimized State : Supported 00:26:15.680 ANA Non-Optimized State : Supported 00:26:15.680 ANA Inaccessible State : Supported 00:26:15.680 ANA Persistent Loss State : Supported 00:26:15.680 ANA Change State : Supported 00:26:15.680 ANAGRPID is not changed : No 00:26:15.680 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:15.680 00:26:15.680 ANA Group Identifier Maximum : 128 00:26:15.680 Number of ANA Group Identifiers : 128 00:26:15.680 Max Number of Allowed Namespaces : 1024 00:26:15.680 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:15.680 Command Effects Log Page: Supported 00:26:15.680 Get Log Page Extended Data: Supported 00:26:15.680 Telemetry Log Pages: Not Supported 00:26:15.680 Persistent Event Log Pages: Not Supported 00:26:15.680 Supported Log Pages Log Page: May Support 00:26:15.680 Commands Supported & Effects Log Page: Not Supported 00:26:15.680 Feature Identifiers & Effects Log Page:May Support 00:26:15.680 NVMe-MI Commands & Effects Log Page: May Support 00:26:15.680 Data Area 4 for Telemetry Log: Not Supported 00:26:15.680 Error Log Page Entries Supported: 128 00:26:15.680 Keep Alive: Supported 00:26:15.680 Keep Alive Granularity: 1000 ms 00:26:15.680 00:26:15.680 NVM Command Set Attributes 00:26:15.680 ========================== 00:26:15.680 Submission Queue Entry Size 00:26:15.680 Max: 64 00:26:15.680 Min: 64 00:26:15.680 Completion Queue Entry Size 00:26:15.680 Max: 16 00:26:15.680 Min: 16 00:26:15.680 Number of Namespaces: 1024 00:26:15.680 Compare Command: Not Supported 00:26:15.680 Write Uncorrectable Command: Not Supported 00:26:15.680 Dataset Management Command: Supported 00:26:15.680 Write Zeroes Command: Supported 00:26:15.680 Set Features Save Field: Not Supported 00:26:15.680 Reservations: Not Supported 00:26:15.680 Timestamp: Not Supported 00:26:15.680 Copy: Not Supported 00:26:15.680 Volatile Write Cache: Present 00:26:15.680 Atomic Write Unit (Normal): 1 00:26:15.680 Atomic Write Unit (PFail): 1 00:26:15.680 Atomic Compare & Write Unit: 1 00:26:15.680 Fused Compare & Write: Not Supported 00:26:15.680 Scatter-Gather List 00:26:15.680 SGL Command Set: Supported 00:26:15.680 SGL Keyed: Not Supported 00:26:15.680 SGL Bit Bucket Descriptor: Not Supported 00:26:15.680 SGL Metadata Pointer: Not Supported 00:26:15.680 Oversized SGL: Not Supported 00:26:15.680 SGL Metadata Address: Not Supported 00:26:15.680 SGL Offset: Supported 00:26:15.680 Transport SGL Data Block: Not Supported 00:26:15.680 Replay Protected Memory Block: Not Supported 00:26:15.680 00:26:15.680 Firmware Slot Information 00:26:15.680 ========================= 00:26:15.680 Active slot: 0 00:26:15.680 00:26:15.680 Asymmetric Namespace Access 00:26:15.680 =========================== 00:26:15.680 Change Count : 0 00:26:15.680 Number of ANA Group Descriptors : 1 00:26:15.680 ANA Group Descriptor : 0 00:26:15.680 ANA Group ID : 1 00:26:15.680 Number of NSID Values : 1 00:26:15.680 Change Count : 0 00:26:15.680 ANA State : 1 00:26:15.680 Namespace Identifier : 1 00:26:15.680 00:26:15.680 Commands Supported and Effects 00:26:15.680 ============================== 00:26:15.680 Admin Commands 00:26:15.680 -------------- 00:26:15.680 Get Log Page (02h): Supported 00:26:15.680 Identify (06h): Supported 00:26:15.680 Abort (08h): Supported 00:26:15.680 Set Features (09h): Supported 00:26:15.680 Get Features (0Ah): Supported 00:26:15.680 Asynchronous Event Request (0Ch): Supported 00:26:15.680 Keep Alive (18h): Supported 00:26:15.680 I/O Commands 00:26:15.680 ------------ 00:26:15.680 Flush (00h): Supported 00:26:15.680 Write (01h): Supported LBA-Change 00:26:15.680 Read (02h): Supported 00:26:15.680 Write Zeroes (08h): Supported LBA-Change 00:26:15.680 Dataset Management (09h): Supported 00:26:15.680 00:26:15.680 Error Log 00:26:15.680 ========= 00:26:15.680 Entry: 0 00:26:15.680 Error Count: 0x3 00:26:15.680 Submission Queue Id: 0x0 00:26:15.680 Command Id: 0x5 00:26:15.680 Phase Bit: 0 00:26:15.680 Status Code: 0x2 00:26:15.680 Status Code Type: 0x0 00:26:15.680 Do Not Retry: 1 00:26:15.680 
Error Location: 0x28 00:26:15.680 LBA: 0x0 00:26:15.680 Namespace: 0x0 00:26:15.681 Vendor Log Page: 0x0 00:26:15.681 ----------- 00:26:15.681 Entry: 1 00:26:15.681 Error Count: 0x2 00:26:15.681 Submission Queue Id: 0x0 00:26:15.681 Command Id: 0x5 00:26:15.681 Phase Bit: 0 00:26:15.681 Status Code: 0x2 00:26:15.681 Status Code Type: 0x0 00:26:15.681 Do Not Retry: 1 00:26:15.681 Error Location: 0x28 00:26:15.681 LBA: 0x0 00:26:15.681 Namespace: 0x0 00:26:15.681 Vendor Log Page: 0x0 00:26:15.681 ----------- 00:26:15.681 Entry: 2 00:26:15.681 Error Count: 0x1 00:26:15.681 Submission Queue Id: 0x0 00:26:15.681 Command Id: 0x4 00:26:15.681 Phase Bit: 0 00:26:15.681 Status Code: 0x2 00:26:15.681 Status Code Type: 0x0 00:26:15.681 Do Not Retry: 1 00:26:15.681 Error Location: 0x28 00:26:15.681 LBA: 0x0 00:26:15.681 Namespace: 0x0 00:26:15.681 Vendor Log Page: 0x0 00:26:15.681 00:26:15.681 Number of Queues 00:26:15.681 ================ 00:26:15.681 Number of I/O Submission Queues: 128 00:26:15.681 Number of I/O Completion Queues: 128 00:26:15.681 00:26:15.681 ZNS Specific Controller Data 00:26:15.681 ============================ 00:26:15.681 Zone Append Size Limit: 0 00:26:15.681 00:26:15.681 00:26:15.681 Active Namespaces 00:26:15.681 ================= 00:26:15.681 get_feature(0x05) failed 00:26:15.681 Namespace ID:1 00:26:15.681 Command Set Identifier: NVM (00h) 00:26:15.681 Deallocate: Supported 00:26:15.681 Deallocated/Unwritten Error: Not Supported 00:26:15.681 Deallocated Read Value: Unknown 00:26:15.681 Deallocate in Write Zeroes: Not Supported 00:26:15.681 Deallocated Guard Field: 0xFFFF 00:26:15.681 Flush: Supported 00:26:15.681 Reservation: Not Supported 00:26:15.681 Namespace Sharing Capabilities: Multiple Controllers 00:26:15.681 Size (in LBAs): 1953525168 (931GiB) 00:26:15.681 Capacity (in LBAs): 1953525168 (931GiB) 00:26:15.681 Utilization (in LBAs): 1953525168 (931GiB) 00:26:15.681 UUID: 14e4e76f-3ae0-4b69-8d9d-4225d7084d75 00:26:15.681 Thin Provisioning: Not Supported 00:26:15.681 Per-NS Atomic Units: Yes 00:26:15.681 Atomic Boundary Size (Normal): 0 00:26:15.681 Atomic Boundary Size (PFail): 0 00:26:15.681 Atomic Boundary Offset: 0 00:26:15.681 NGUID/EUI64 Never Reused: No 00:26:15.681 ANA group ID: 1 00:26:15.681 Namespace Write Protected: No 00:26:15.681 Number of LBA Formats: 1 00:26:15.681 Current LBA Format: LBA Format #00 00:26:15.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:15.681 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:15.681 rmmod nvme_tcp 00:26:15.681 rmmod nvme_fabrics 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:15.681 11:44:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.681 11:44:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:18.208 11:44:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:20.104 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:20.104 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:20.104 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:20.104 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:20.104 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:20.104 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:26:20.104 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:20.362 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:20.363 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:21.295 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:26:21.295 00:26:21.295 real 0m15.909s 00:26:21.295 user 0m3.825s 00:26:21.295 sys 0m8.110s 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:21.295 ************************************ 00:26:21.295 END TEST nvmf_identify_kernel_target 00:26:21.295 ************************************ 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.295 ************************************ 00:26:21.295 START TEST nvmf_auth_host 00:26:21.295 ************************************ 00:26:21.295 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:21.553 * Looking for test storage... 
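Before the auth test begins, the preceding entries tear the kernel target back down (clean_kernel_target) and setup.sh rebinds the ioatdma and NVMe devices to vfio-pci. A consolidated sketch of the teardown mirrored from the trace above; the target of the 'echo 0' is not visible in the xtrace output and is assumed to be the namespace enable attribute.

    # Teardown mirroring the clean_kernel_target steps traced above.
    nvmet=/sys/kernel/config/nvmet
    subnqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > "$nvmet/subsystems/$subnqn/namespaces/1/enable"   # assumed redirect target
    rm -f "$nvmet/ports/1/subsystems/$subnqn"                  # drop the port -> subsystem link
    rmdir "$nvmet/subsystems/$subnqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$subnqn"
    modprobe -r nvmet_tcp nvmet    # unload once /sys/module/nvmet/holders is empty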
00:26:21.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:21.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.554 --rc genhtml_branch_coverage=1 00:26:21.554 --rc genhtml_function_coverage=1 00:26:21.554 --rc genhtml_legend=1 00:26:21.554 --rc geninfo_all_blocks=1 00:26:21.554 --rc geninfo_unexecuted_blocks=1 00:26:21.554 00:26:21.554 ' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:21.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.554 --rc genhtml_branch_coverage=1 00:26:21.554 --rc genhtml_function_coverage=1 00:26:21.554 --rc genhtml_legend=1 00:26:21.554 --rc geninfo_all_blocks=1 00:26:21.554 --rc geninfo_unexecuted_blocks=1 00:26:21.554 00:26:21.554 ' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:21.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.554 --rc genhtml_branch_coverage=1 00:26:21.554 --rc genhtml_function_coverage=1 00:26:21.554 --rc genhtml_legend=1 00:26:21.554 --rc geninfo_all_blocks=1 00:26:21.554 --rc geninfo_unexecuted_blocks=1 00:26:21.554 00:26:21.554 ' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:21.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.554 --rc genhtml_branch_coverage=1 00:26:21.554 --rc genhtml_function_coverage=1 00:26:21.554 --rc genhtml_legend=1 00:26:21.554 --rc geninfo_all_blocks=1 00:26:21.554 --rc geninfo_unexecuted_blocks=1 00:26:21.554 00:26:21.554 ' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.554 11:44:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:21.554 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.555 11:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:26.817 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.076 11:44:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:27.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:27.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.076 
11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:27.076 Found net devices under 0000:af:00.0: cvl_0_0 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:27.076 Found net devices under 0000:af:00.1: cvl_0_1 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.076 11:44:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.076 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:26:27.076 00:26:27.076 --- 10.0.0.2 ping statistics --- 00:26:27.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.076 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:26:27.077 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:27.333 00:26:27.334 --- 10.0.0.1 ping statistics --- 00:26:27.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.334 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1375777 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1375777 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1375777 ']' 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
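The auth test then builds its TCP test bed from the two e810 ports detected earlier (cvl_0_0 and cvl_0_1): the target-side interface is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, an iptables rule admits TCP port 4420, and connectivity is verified with ping in both directions before nvmf_tgt is started inside the namespace (the waitforlisten step that follows in the trace). A condensed sketch of that nvmf_tcp_init sequence, as traced above:

    # Condensed from the nvmf_tcp_init trace above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
    modprobe nvme-tcp
    # nvmf_tgt is then launched inside the namespace, as in the trace below:
    # ip netns exec "$NS" .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth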
00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:27.334 11:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.591 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1968ad8347a752406312818760dc5939 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JYx 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1968ad8347a752406312818760dc5939 0 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1968ad8347a752406312818760dc5939 0 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1968ad8347a752406312818760dc5939 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JYx 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JYx 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.JYx 00:26:27.592 11:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=94166e63c32cd92f81bfb59572fd81afa38b90bc9a526658b1d49ca33c65cb2d 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cAb 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 94166e63c32cd92f81bfb59572fd81afa38b90bc9a526658b1d49ca33c65cb2d 3 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 94166e63c32cd92f81bfb59572fd81afa38b90bc9a526658b1d49ca33c65cb2d 3 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=94166e63c32cd92f81bfb59572fd81afa38b90bc9a526658b1d49ca33c65cb2d 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cAb 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cAb 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cAb 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:27.592 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=446f18a3bd2d5be6b6ac8dffa82c6d7ed59e720eb5724303 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Q8f 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # 
format_dhchap_key 446f18a3bd2d5be6b6ac8dffa82c6d7ed59e720eb5724303 0 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 446f18a3bd2d5be6b6ac8dffa82c6d7ed59e720eb5724303 0 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=446f18a3bd2d5be6b6ac8dffa82c6d7ed59e720eb5724303 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Q8f 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Q8f 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Q8f 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca1b3b45bf80a451dd7abb4a8736faca3a0327259dcb0e5b 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.B6k 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca1b3b45bf80a451dd7abb4a8736faca3a0327259dcb0e5b 2 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca1b3b45bf80a451dd7abb4a8736faca3a0327259dcb0e5b 2 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca1b3b45bf80a451dd7abb4a8736faca3a0327259dcb0e5b 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.B6k 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.B6k 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.B6k 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@751 -- # local digest len file key 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cad4f2bed88d38a54c0c165b5805d800 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8BN 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cad4f2bed88d38a54c0c165b5805d800 1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cad4f2bed88d38a54c0c165b5805d800 1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cad4f2bed88d38a54c0c165b5805d800 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8BN 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8BN 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8BN 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f22da8b4942d6227016385eaa8fc3df3 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1Fb 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f22da8b4942d6227016385eaa8fc3df3 1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f22da8b4942d6227016385eaa8fc3df3 1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key 
digest 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f22da8b4942d6227016385eaa8fc3df3 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1Fb 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1Fb 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.1Fb 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:27.850 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06c9bde9203df19e624bab8801868f297d62e45c5c2cbee7 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.219 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06c9bde9203df19e624bab8801868f297d62e45c5c2cbee7 2 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06c9bde9203df19e624bab8801868f297d62e45c5c2cbee7 2 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06c9bde9203df19e624bab8801868f297d62e45c5c2cbee7 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.219 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.219 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.219 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.108 11:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=85c41aa518e1602af81159722a8fe406 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.p95 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 85c41aa518e1602af81159722a8fe406 0 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 85c41aa518e1602af81159722a8fe406 0 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=85c41aa518e1602af81159722a8fe406 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.p95 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.p95 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.p95 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a254b00dffc910ffad49d2619f5c9bfaee169b94980711bf48dc6ef16e386338 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.maH 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a254b00dffc910ffad49d2619f5c9bfaee169b94980711bf48dc6ef16e386338 3 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a254b00dffc910ffad49d2619f5c9bfaee169b94980711bf48dc6ef16e386338 3 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a254b00dffc910ffad49d2619f5c9bfaee169b94980711bf48dc6ef16e386338 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.108 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.maH 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.maH 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.maH 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1375777 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1375777 ']' 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:28.109 11:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JYx 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cAb ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cAb 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Q8f 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.366 
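The gen_dhchap_key calls above draw the requested number of hex characters from /dev/urandom (xxd -p -c0 -l len/2), store them in a mktemp'd spdk.key-* file, and wrap them as a DHHC-1:<id>: secret, where id 00/01/02/03 selects null/sha256/sha384/sha512 and the base64 payload carries the ASCII key followed by a 4-byte CRC32 of it (the keys echoed in the trace decode exactly this way). A minimal stand-alone sketch of that formatting step; the inline Python and the little-endian CRC byte order are assumptions about the helper's internals, not copied from it:

# sketch of gen_dhchap_key <digest> <len>: random hex key -> DHHC-1 secret file (assumed internals)
digest=1; len=32                                  # 1 = sha256; 32 hex chars of key material
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # same randomness source as in the trace
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
# payload = ASCII key + CRC32 of it (little-endian assumed), wrapped as DHHC-1:<two-digit id>:<base64>:
blob = key.encode() + zlib.crc32(key.encode()).to_bytes(4, "little")
print(f"DHHC-1:{digest:02d}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"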
11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.B6k ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B6k 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8BN 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.1Fb ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Fb 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:28.366 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.219 00:26:28.367 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.367 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.p95 ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.p95 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.maH 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
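Each secret file is then handed to the running target's keyring over JSON-RPC, host secrets as key0..key4 and the paired controller secrets as ckey0..ckey3 (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py). Outside the harness the same registrations look roughly like this, using the file names from this run:

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
# register host-side and controller-side DH-HMAC-CHAP secrets under the names the attach step expects
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.JYx
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cAb
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.Q8f
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B6k
# ... key2/ckey2, key3/ckey3 and key4 follow the same pattern (key4 has no controller secret)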
00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:28.624 11:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:31.146 Waiting for block devices as requested 00:26:31.146 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:26:31.403 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:31.403 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:31.403 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:31.661 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:31.661 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:31.661 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:31.918 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:31.918 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:31.918 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:31.918 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:32.175 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:32.175 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:32.175 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:32.432 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:32.432 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:32.432 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:32.996 No valid GPT data, bailing 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:32.996 11:44:33 
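configure_kernel_target builds the kernel-side NVMe-oF target out of nvmet configfs objects: the mkdir calls above create the subsystem, its namespace 1 and port 1, and the echo/ln records that follow back the namespace with the local /dev/nvme0n1 (which the GPT probe above confirmed is unused) and expose it on 10.0.0.1:4420 over TCP before the discovery check. The trace only shows the values being written, not the attribute files; the sketch below fills those in with the standard nvmet configfs names, which should be treated as an assumption:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1            > "$subsys/attr_allow_any_host"       # later narrowed to an allowed_hosts entry for auth
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# initiator-side sanity check; the trace additionally pins --hostnqn/--hostid to the host's uuid NQN
nvme discover -t tcp -a 10.0.0.1 -s 4420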
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:32.996 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:33.254 00:26:33.254 Discovery Log Number of Records 2, Generation counter 2 00:26:33.254 =====Discovery Log Entry 0====== 00:26:33.254 trtype: tcp 00:26:33.254 adrfam: ipv4 00:26:33.254 subtype: current discovery subsystem 00:26:33.254 treq: not specified, sq flow control disable supported 00:26:33.254 portid: 1 00:26:33.254 trsvcid: 4420 00:26:33.254 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:33.254 traddr: 10.0.0.1 00:26:33.254 eflags: none 00:26:33.254 sectype: none 00:26:33.254 =====Discovery Log Entry 1====== 00:26:33.254 trtype: tcp 00:26:33.254 adrfam: ipv4 00:26:33.254 subtype: nvme subsystem 00:26:33.254 treq: not specified, sq flow control disable supported 00:26:33.254 portid: 1 00:26:33.254 trsvcid: 4420 00:26:33.254 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:33.254 traddr: 10.0.0.1 00:26:33.254 eflags: none 00:26:33.254 sectype: none 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.254 11:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.513 nvme0n1 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
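nvmet_auth_set_key is the target half of each authentication case: it selects the HMAC and FFDHE group and installs the host secret (and, when present, the controller secret for bidirectional auth) for nqn.2024-02.io.spdk:host0, the host entry linked into the subsystem's allowed_hosts above. The echo destinations are not visible in the trace; on an auth-capable kernel they plausibly map onto the per-host dhchap attributes, so the following is a sketch rather than the script's literal code (values copied from the keyid 1 case traced earlier):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
# attribute names assumed from the upstream nvmet configfs layout, not read from the trace
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==:' > "$host/dhchap_key"
echo 'DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==:' > "$host/dhchap_ctrl_key"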
00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.513 nvme0n1 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.513 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.772 11:44:34 
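connect_authenticate is the initiator half: bdev_nvme_set_options narrows the negotiable DH-HMAC-CHAP digests and DH groups, bdev_nvme_attach_controller connects to the kernel target using the keyring names registered earlier, and bdev_nvme_get_controllers / bdev_nvme_detach_controller confirm the authenticated session and tear it down (the nvme0n1 lines are the namespace appearing). The keyid 0 case just traced corresponds to roughly these RPC calls:

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers      # expect a single controller named nvme0
$rpc bdev_nvme_detach_controller nvme0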
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.772 nvme0n1 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.772 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:34.041 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.042 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.043 nvme0n1 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.043 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.044 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.311 11:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 nvme0n1 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.311 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.569 nvme0n1 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.569 11:44:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.569 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.570 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.828 nvme0n1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.828 
11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.828 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.086 nvme0n1 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:35.086 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.087 11:44:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.087 11:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.345 nvme0n1 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.345 11:44:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.345 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.603 nvme0n1 00:26:35.603 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.604 11:44:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.604 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.863 nvme0n1 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.863 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.121 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.121 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.122 11:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 nvme0n1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:36.380 11:44:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.380 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.639 nvme0n1 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.639 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
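The trace above and below repeats one connect_authenticate pass per DH-HMAC-CHAP digest/dhgroup/key-id combination: the host's RPC options are narrowed to the digest and FFDHE group under test, a TCP controller is attached with the numbered key, success is checked by listing the controller again, and the controller is detached before the next combination. A condensed sketch of that pass, reconstructed only from the rpc_cmd invocations visible in this trace and assuming the script's rpc_cmd helper and keys/ckeys arrays are in scope (the verify_dhchap_connect name itself is illustrative, not part of auth.sh):

    # Sketch of one connect_authenticate-style pass (helper name is hypothetical).
    verify_dhchap_connect() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Pass the controller key only when a ckey exists for this key id,
        # mirroring the trace's ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion.
        local ctrlr_key=()
        [[ -n ${ckeys[keyid]} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
        # Attach over TCP to the target seen in this run (10.0.0.1:4420).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ctrlr_key[@]}"
        # Authentication succeeded if the controller shows up, then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Key id 4 is the one leg where the controller key is empty ([[ -z '' ]] in the trace), so no --dhchap-ctrlr-key is passed and only the host is authenticated; the other key ids also hand a controller key to the initiator, which should exercise bidirectional authentication for each dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 below).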
00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.640 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.898 nvme0n1 00:26:36.898 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.898 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.898 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.898 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.898 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.898 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.156 11:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.415 nvme0n1 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.415 11:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.415 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.674 nvme0n1 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.674 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.675 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.675 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.675 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.675 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.933 11:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.190 nvme0n1 00:26:38.190 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.190 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.190 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.190 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.190 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.190 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 
00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.448 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.016 nvme0n1 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.016 11:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.016 11:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.274 nvme0n1 00:26:39.274 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.274 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.274 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.274 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.274 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.274 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.532 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.100 nvme0n1 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.100 11:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.666 nvme0n1 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.666 11:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.600 nvme0n1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.167 nvme0n1 00:26:42.167 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.167 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.167 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.167 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.167 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.167 11:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.167 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.167 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.167 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.167 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:42.425 
11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.425 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.991 nvme0n1 00:26:42.991 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.991 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.991 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.991 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.991 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.250 
11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.250 11:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.817 nvme0n1 00:26:43.817 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.817 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.817 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.817 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.817 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.074 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.075 11:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.641 nvme0n1 00:26:44.641 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.641 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.641 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.641 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.641 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.641 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.899 nvme0n1 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.899 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.900 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.900 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.900 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.158 nvme0n1 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:45.158 11:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.158 11:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.416 nvme0n1 00:26:45.416 11:44:46 
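On the host side, each connect_authenticate pass boils down to two SPDK RPCs: bdev_nvme_set_options narrows the DH-HMAC-CHAP negotiation to a single digest and FFDHE group, and bdev_nvme_attach_controller performs the fabrics connect with --dhchap-key (and --dhchap-ctrlr-key when bidirectional authentication is exercised, as with key2/ckey2 just above). A minimal standalone equivalent, assuming SPDK's scripts/rpc.py client is what rpc_cmd wraps and that key2/ckey2 are already registered key names:

    # Sketch of one host-side iteration (sha384 / ffdhe2048 / keyid 2).
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe2048

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2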
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.416 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.417 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.675 nvme0n1 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.675 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.934 nvme0n1 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.934 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.935 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.193 nvme0n1 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.193 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.194 
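The get_main_ns_ip expansions that precede every attach are only resolving which address to connect to: an associative array maps the active transport to the environment variable holding the initiator-side IP, and indirect expansion turns that variable name into 10.0.0.1. A reconstruction from the xtrace lines above (TEST_TRANSPORT is an assumed name for whatever variable expands to "tcp" here):

    # Reconstructed helper; prints the main namespace IP for the active transport.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                   # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                             # indirect expansion
        echo "${!ip}"                                           # -> 10.0.0.1
    }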
11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.194 11:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.194 11:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.452 nvme0n1 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.452 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.711 nvme0n1 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.711 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.970 nvme0n1 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.970 
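Putting the pieces together, this whole section is one nested sweep: for every digest, FFDHE group, and keyid the target key is programmed, the host attaches with DH-HMAC-CHAP, the resulting controller name is checked against nvme0, and the controller is detached before the next combination. A skeleton of that flow, with the digests/dhgroups/keys arrays and helper bodies assumed to be the ones defined earlier in host/auth.sh, and the verification written out explicitly (in the trace it runs at host/auth.sh@64-65):

    # Skeleton only; arrays and helpers come from earlier in auth.sh.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host attach

                # Verification and teardown (host/auth.sh@64-65 in the trace):
                [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
                ./scripts/rpc.py bdev_nvme_detach_controller nvme0
            done
        done
    done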
11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.970 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.228 nvme0n1 00:26:47.228 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.228 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.228 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.228 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.228 11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.228 
11:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.228 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.229 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.487 nvme0n1 00:26:47.487 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.487 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.487 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.487 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.487 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.487 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.745 11:44:48 
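The trace above is the host-side half of one iteration: connect_authenticate first restricts the initiator to the digest/DH-group pair under test via bdev_nvme_set_options, then attaches to the target with the matching DH-HMAC-CHAP keys. A minimal standalone sketch of that step, calling SPDK's scripts/rpc.py directly instead of the test suite's rpc_cmd wrapper (the rpc.py path and the assumption that key1/ckey1 were registered earlier in the test, outside this excerpt, are mine; the address, NQNs and flags are taken from the trace):

    rpc=./scripts/rpc.py   # assumed path to rpc.py in the local SPDK checkout

    # Allow only the digest/DH group pair being exercised (sha384 + ffdhe4096 here).
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Attach to the target at 10.0.0.1:4420 and authenticate with key1 as the
    # host key and ckey1 as the bidirectional controller key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1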
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.745 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.004 nvme0n1 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.004 11:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.262 nvme0n1 00:26:48.262 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.262 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.263 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.521 nvme0n1 00:26:48.521 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.779 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.780 11:44:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.780 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.038 nvme0n1 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
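Every attach is followed by the same sanity check and teardown, visible in the host/auth.sh@64/@65 trace lines: list the bdev_nvme controllers, confirm the expected name came up (meaning the authentication actually succeeded), and detach before the next key is tried. A sketch of that check, under the same assumptions as the previous sketch:

    rpc=./scripts/rpc.py   # assumed path, as above

    # The controller only exists if the DH-HMAC-CHAP handshake succeeded.
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || { echo "attach/authentication failed" >&2; exit 1; }

    # Tear down so the next digest/dhgroup/keyid combination starts clean.
    $rpc bdev_nvme_detach_controller nvme0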
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.038 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.039 11:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.605 nvme0n1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.605 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.172 nvme0n1 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.172 11:44:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.172 11:44:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.172 11:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.738 nvme0n1 00:26:50.738 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.738 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.738 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.739 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.739 
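Stepping back, the host/auth.sh@101-@104 markers repeated through this excerpt show the overall sweep being run for the sha384 digest: each DH group is paired with every pre-generated key index, the key is first programmed on the target side (nvmet_auth_set_key) and then exercised from the host side (connect_authenticate). A reconstructed shape of that loop follows; the dhgroups and keys arrays are defined earlier in host/auth.sh, outside this excerpt, and only ffdhe4096/ffdhe6144/ffdhe8192 with key indices 0-4 appear here:

    # Sweep reconstructed from the host/auth.sh@101-@104 trace lines.
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)    # groups seen in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do          # indices 0..4 in the trace
            nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # target-side key setup
            connect_authenticate sha384 "$dhgroup" "$keyid"   # host-side attach/verify/detach
        done
    done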
11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.306 nvme0n1 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.306 11:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.564 nvme0n1 00:26:51.564 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.823 11:44:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.823 11:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.758 nvme0n1 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.758 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.759 11:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.325 nvme0n1 00:26:53.325 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.325 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.325 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.325 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.325 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.325 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.583 
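The nvmet_auth_set_key calls traced above provision the target side for one digest/dhgroup/key combination: they emit the HMAC name (e.g. 'hmac(sha384)'), the FFDHE group, the host secret and, when one is defined, the controller secret. The redirect targets are not visible in this excerpt; the sketch below assumes they are the kernel nvmet configfs attributes for the allowed host NQN, so the path and attribute names are assumptions rather than something the log confirms.

# Sketch of what one nvmet_auth_set_key invocation appears to do; only the echoed
# values are taken from the trace, the configfs location is assumed.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
	echo "hmac($digest)" > "$host/dhchap_hash"        # e.g. hmac(sha384)
	echo "$dhgroup"      > "$host/dhchap_dhgroup"     # e.g. ffdhe8192
	echo "$key"          > "$host/dhchap_key"         # DHHC-1:...: host secret
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional auth only when a ckey exists
}
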
11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.583 11:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.514 nvme0n1 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.514 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.515 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.078 nvme0n1 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.078 11:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.078 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.335 11:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.335 11:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.900 nvme0n1 00:26:55.900 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.900 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.900 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.900 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.900 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.157 nvme0n1 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.157 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.158 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.158 11:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.414 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.414 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.414 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.414 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.414 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.414 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.415 nvme0n1 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:56.415 
11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.415 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.674 nvme0n1 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.674 
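After every attach the script checks that authentication actually succeeded and then tears the controller down before the next key is tried; the bare nvme0n1 tokens in the trace appear to be the controller's namespace device turning up after a successful attach. The backslash-escaped comparison ([[ nvme0 == \n\v\m\e\0 ]]) is only xtrace quoting of a literal string, not a pattern. A condensed sketch of that recurring sequence, again assuming rpc_cmd wraps scripts/rpc.py:

# List controllers over RPC, require that the authenticated attach produced nvme0,
# then detach so the next digest/dhgroup/key combination starts from a clean state.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1
./scripts/rpc.py bdev_nvme_detach_controller nvme0
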
11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.674 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.675 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 nvme0n1 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.934 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 nvme0n1 00:26:57.193 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.194 11:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.453 nvme0n1 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.453 
11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.453 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.454 11:44:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.454 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.714 nvme0n1 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:57.714 11:44:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.714 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.974 nvme0n1 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.974 11:44:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.974 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.233 nvme0n1 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.233 11:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.234 
11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.234 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
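The xtrace output above, and the blocks that follow, repeat one sequence per key slot and FFDHE group: program the kernel target's DH-HMAC-CHAP secret, restrict the SPDK host to the digest/dhgroup under test, attach with the matching key, confirm the controller shows up as nvme0, then detach it. A condensed sketch of that sequence, in the shell of the trace itself, is given below; rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the keys[]/ckeys[] arrays are the helpers visible in the trace, the NQNs, address, port and RPC flags are copied verbatim from the traced commands, and the loop bounds cover only the dhgroups visible in this part of the log. Treat it as an illustrative reconstruction of what the trace is doing, not the literal body of host/auth.sh.
# Condensed sketch (assumes an SPDK test environment where rpc_cmd wraps scripts/rpc.py
# and keys[]/ckeys[] hold the DHHC-1 secrets shown in the trace).
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do   # groups seen in this excerpt
    for keyid in "${!keys[@]}"; do                  # key slots 0..4
        # Target side: install the secret for this slot (hmac(sha512), dhgroup, DHHC-1 key).
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # Host side: only negotiate the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # Add --dhchap-ctrlr-key only when a controller key exists for this slot.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # get_main_ns_ip resolves NVMF_INITIATOR_IP (10.0.0.1 here) for the tcp transport.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The controller must authenticate and appear before moving on.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done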
00:26:58.493 nvme0n1 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.493 11:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.493 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.752 nvme0n1 00:26:58.752 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.752 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.752 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.752 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.752 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.752 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.010 11:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.010 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.011 11:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.011 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.270 nvme0n1 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.270 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.271 11:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.271 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.530 nvme0n1 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.530 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.789 nvme0n1 00:26:59.789 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.789 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.789 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.789 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.789 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.048 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.049 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.308 nvme0n1 00:27:00.308 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.308 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.308 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.308 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.308 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.308 11:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.308 11:45:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.308 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.876 nvme0n1 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.876 11:45:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.876 11:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.444 nvme0n1 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.444 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.012 nvme0n1 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.012 11:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 nvme0n1 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.581 11:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.581 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.149 nvme0n1 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2OGFkODM0N2E3NTI0MDYzMTI4MTg3NjBkYzU5MznVpECY: 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTQxNjZlNjNjMzJjZDkyZjgxYmZiNTk1NzJmZDgxYWZhMzhiOTBiYzlhNTI2NjU4YjFkNDljYTMzYzY1Y2IyZEUWTI8=: 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.149 11:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.085 nvme0n1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.085 11:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.652 nvme0n1 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.652 11:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.652 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.911 11:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.911 11:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.658 nvme0n1 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDZjOWJkZTkyMDNkZjE5ZTYyNGJhYjg4MDE4NjhmMjk3ZDYyZTQ1YzVjMmNiZWU3Xz1Vgw==: 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: ]] 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODVjNDFhYTUxOGUxNjAyYWY4MTE1OTcyMmE4ZmU0MDZsvCOG: 00:27:05.658 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.659 11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.659 
11:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.256 nvme0n1 00:27:06.256 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.256 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.256 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.256 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.256 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.256 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.513 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI1NGIwMGRmZmM5MTBmZmFkNDlkMjYxOWY1YzliZmFlZTE2OWI5NDk4MDcxMWJmNDhkYzZlZjE2ZTM4NjMzOMnYP/o=: 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.514 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.080 nvme0n1 00:27:07.080 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.340 11:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 request: 00:27:07.340 { 00:27:07.340 "name": "nvme0", 00:27:07.340 "trtype": "tcp", 00:27:07.340 "traddr": "10.0.0.1", 00:27:07.340 "adrfam": "ipv4", 00:27:07.340 "trsvcid": "4420", 00:27:07.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:07.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:07.340 "prchk_reftag": false, 00:27:07.340 "prchk_guard": false, 00:27:07.340 "hdgst": false, 00:27:07.340 "ddgst": false, 00:27:07.340 "allow_unrecognized_csi": false, 00:27:07.340 "method": "bdev_nvme_attach_controller", 00:27:07.340 "req_id": 1 00:27:07.340 } 00:27:07.340 Got JSON-RPC error response 00:27:07.340 response: 00:27:07.340 { 00:27:07.340 "code": -5, 00:27:07.340 "message": "Input/output error" 00:27:07.340 } 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.340 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:07.341 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.341 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.601 request: 00:27:07.601 { 00:27:07.601 "name": "nvme0", 00:27:07.601 "trtype": "tcp", 00:27:07.601 "traddr": "10.0.0.1", 00:27:07.601 "adrfam": "ipv4", 00:27:07.601 "trsvcid": "4420", 00:27:07.601 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:07.601 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:07.601 "prchk_reftag": false, 00:27:07.601 "prchk_guard": false, 00:27:07.601 "hdgst": false, 00:27:07.601 "ddgst": false, 00:27:07.601 "dhchap_key": "key2", 00:27:07.601 "allow_unrecognized_csi": false, 00:27:07.601 "method": "bdev_nvme_attach_controller", 00:27:07.601 "req_id": 1 00:27:07.601 } 00:27:07.601 Got JSON-RPC error response 00:27:07.601 response: 00:27:07.601 { 00:27:07.601 "code": -5, 00:27:07.601 "message": "Input/output error" 00:27:07.601 } 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.601 request: 00:27:07.601 { 00:27:07.601 "name": "nvme0", 00:27:07.601 "trtype": "tcp", 00:27:07.601 "traddr": "10.0.0.1", 00:27:07.601 "adrfam": "ipv4", 00:27:07.601 "trsvcid": "4420", 00:27:07.601 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:07.601 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:07.601 "prchk_reftag": false, 00:27:07.601 "prchk_guard": false, 00:27:07.601 "hdgst": false, 00:27:07.601 "ddgst": false, 00:27:07.601 "dhchap_key": "key1", 00:27:07.601 "dhchap_ctrlr_key": "ckey2", 00:27:07.601 "allow_unrecognized_csi": false, 00:27:07.601 "method": "bdev_nvme_attach_controller", 00:27:07.601 "req_id": 1 00:27:07.601 } 00:27:07.601 Got JSON-RPC error response 00:27:07.601 response: 00:27:07.601 { 00:27:07.601 "code": -5, 00:27:07.601 "message": "Input/output 
error" 00:27:07.601 } 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.601 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.861 nvme0n1 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.861 request: 00:27:07.861 { 00:27:07.861 "name": "nvme0", 00:27:07.861 "dhchap_key": "key1", 00:27:07.861 "dhchap_ctrlr_key": "ckey2", 00:27:07.861 "method": "bdev_nvme_set_keys", 00:27:07.861 "req_id": 1 00:27:07.861 } 00:27:07.861 Got JSON-RPC error response 00:27:07.861 response: 00:27:07.861 { 00:27:07.861 "code": -13, 00:27:07.861 "message": "Permission denied" 00:27:07.861 } 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:07.861 11:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:09.237 11:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.169 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2ZjE4YTNiZDJkNWJlNmI2YWM4ZGZmYTgyYzZkN2VkNTllNzIwZWI1NzI0MzAzP7rlLg==: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: ]] 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2ExYjNiNDViZjgwYTQ1MWRkN2FiYjRhODczNmZhY2EzYTAzMjcyNTlkY2IwZTVi9o9JOw==: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.170 nvme0n1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2FkNGYyYmVkODhkMzhhNTRjMGMxNjViNTgwNWQ4MDBJMSyK: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: ]] 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjIyZGE4YjQ5NDJkNjIyNzAxNjM4NWVhYThmYzNkZjMqr6rB: 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.170 11:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.429 request: 00:27:10.430 { 00:27:10.430 "name": "nvme0", 00:27:10.430 "dhchap_key": "key2", 00:27:10.430 "dhchap_ctrlr_key": "ckey1", 00:27:10.430 "method": "bdev_nvme_set_keys", 00:27:10.430 "req_id": 1 00:27:10.430 } 00:27:10.430 Got JSON-RPC error response 00:27:10.430 response: 00:27:10.430 { 00:27:10.430 "code": -13, 00:27:10.430 "message": "Permission denied" 00:27:10.430 } 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:10.430 11:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:11.368 11:45:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:11.368 rmmod nvme_tcp 00:27:11.368 rmmod nvme_fabrics 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1375777 ']' 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1375777 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1375777 ']' 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1375777 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:11.368 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1375777 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1375777' 00:27:11.627 killing process with pid 1375777 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1375777 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1375777 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:11.627 11:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:14.156 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.157 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:14.157 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:14.157 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.157 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:14.157 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:14.157 11:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:16.692 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:16.692 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:17.266 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:17.526 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.JYx /tmp/spdk.key-null.Q8f /tmp/spdk.key-sha256.8BN /tmp/spdk.key-sha384.219 /tmp/spdk.key-sha512.maH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:17.526 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:20.062 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:20.062 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:27:20.062 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:20.062 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:20.062 00:27:20.062 real 0m58.462s 00:27:20.062 user 0m53.850s 00:27:20.062 sys 0m12.017s 00:27:20.062 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:20.062 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.062 ************************************ 00:27:20.063 END TEST nvmf_auth_host 00:27:20.063 ************************************ 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.063 ************************************ 00:27:20.063 START TEST nvmf_digest 00:27:20.063 ************************************ 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:20.063 * Looking for test storage... 
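A note on the nvmf_auth_host steps that finished above: the suite rotates DH-HMAC-CHAP keys on the initiator with bdev_nvme_set_keys and checks that a mismatched key / controller-key pair is refused with JSON-RPC error -13 (Permission denied), as captured in the request/response dumps earlier in this run. The sketch below is a standalone restatement of those two calls; the rpc.py path and key names are copied from the log, and it assumes key1/key2/ckey1/ckey2 were already registered in the application's keyring by the earlier part of the test, so it is an illustration rather than a command the harness ran in exactly this form.

  # Sketch: the key-rotation checks exercised by nvmf_auth_host, replayed by hand.
  # Assumes the named keys (key1, key2, ckey1, ckey2) are already loaded in the
  # SPDK application's keyring, as the earlier part of the test arranged.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # A matching key / controller-key pair is accepted:
  $RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # A mismatched pair is refused with error -13 (Permission denied), exactly as
  # shown in the request/response dump above:
  $RPC bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
      || echo "rejected as expected"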
00:27:20.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:20.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.063 --rc genhtml_branch_coverage=1 00:27:20.063 --rc genhtml_function_coverage=1 00:27:20.063 --rc genhtml_legend=1 00:27:20.063 --rc geninfo_all_blocks=1 00:27:20.063 --rc geninfo_unexecuted_blocks=1 00:27:20.063 00:27:20.063 ' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:20.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.063 --rc genhtml_branch_coverage=1 00:27:20.063 --rc genhtml_function_coverage=1 00:27:20.063 --rc genhtml_legend=1 00:27:20.063 --rc geninfo_all_blocks=1 00:27:20.063 --rc geninfo_unexecuted_blocks=1 00:27:20.063 00:27:20.063 ' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:20.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.063 --rc genhtml_branch_coverage=1 00:27:20.063 --rc genhtml_function_coverage=1 00:27:20.063 --rc genhtml_legend=1 00:27:20.063 --rc geninfo_all_blocks=1 00:27:20.063 --rc geninfo_unexecuted_blocks=1 00:27:20.063 00:27:20.063 ' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:20.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.063 --rc genhtml_branch_coverage=1 00:27:20.063 --rc genhtml_function_coverage=1 00:27:20.063 --rc genhtml_legend=1 00:27:20.063 --rc geninfo_all_blocks=1 00:27:20.063 --rc geninfo_unexecuted_blocks=1 00:27:20.063 00:27:20.063 ' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.063 
11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.063 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.064 11:45:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.064 11:45:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.335 
11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:25.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:25.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:25.335 Found net devices under 0000:af:00.0: cvl_0_0 
00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:25.335 Found net devices under 0000:af:00.1: cvl_0_1 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.335 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.336 11:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:27:25.336 00:27:25.336 --- 10.0.0.2 ping statistics --- 00:27:25.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.336 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:27:25.336 00:27:25.336 --- 10.0.0.1 ping statistics --- 00:27:25.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.336 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.336 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:25.595 ************************************ 00:27:25.595 START TEST nvmf_digest_clean 00:27:25.595 ************************************ 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1391705 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1391705 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1391705 ']' 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.595 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:25.595 [2024-11-15 11:45:26.288946] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:27:25.595 [2024-11-15 11:45:26.289000] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.595 [2024-11-15 11:45:26.388664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.595 [2024-11-15 11:45:26.437253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.595 [2024-11-15 11:45:26.437292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.595 [2024-11-15 11:45:26.437303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.595 [2024-11-15 11:45:26.437312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.595 [2024-11-15 11:45:26.437320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
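For the digest suite starting here, nvmfappstart launches a dedicated nvmf_tgt inside the cvl_0_0_ns_spdk namespace set up above (target side cvl_0_0 at 10.0.0.2, initiator side cvl_0_1 at 10.0.0.1, TCP port 4420 opened via iptables) and waits for its RPC socket before configuring it. A rough standalone equivalent, reconstructed from the command line visible in the log, is sketched below; the polling loop is a simplified stand-in for the framework's waitforlisten helper.

  # Sketch: bringing up the digest-test target the same way nvmfappstart does above.
  NS=cvl_0_0_ns_spdk
  TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF --wait-for-rpc &
  pid=$!

  # Poll the default RPC socket until the app answers; waitforlisten performs this
  # (plus extra sanity checks) inside the test framework.
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  echo "nvmf_tgt (pid $pid) is ready for configuration RPCs"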
00:27:25.595 [2024-11-15 11:45:26.438032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 null0 00:27:25.854 [2024-11-15 11:45:26.628853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.854 [2024-11-15 11:45:26.653085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1391726 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1391726 /var/tmp/bperf.sock 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1391726 ']' 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:25.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.854 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 [2024-11-15 11:45:26.684515] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:27:25.854 [2024-11-15 11:45:26.684552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391726 ] 00:27:26.114 [2024-11-15 11:45:26.737763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.114 [2024-11-15 11:45:26.778997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.114 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:26.114 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:26.114 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:26.114 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:26.114 11:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:26.373 11:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.373 11:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.941 nvme0n1 00:27:26.941 11:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:26.941 11:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:26.941 Running I/O for 2 seconds... 
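The run_bperf helper drives a fixed RPC sequence against the bdevperf instance listening on /var/tmp/bperf.sock: finish framework initialization, attach the target over TCP with the data digest enabled, then start the workload through bdevperf.py. The condensed sketch below restates that sequence using the paths and arguments visible in the log; the per-run summary it produces follows immediately below.

  # Sketch of the RPC sequence run_bperf issues above for the 4 KiB randread case.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  SOCK=/var/tmp/bperf.sock

  # bdevperf was started with --wait-for-rpc, so finish its framework init first.
  "$RPC" -s "$SOCK" framework_start_init

  # Attach the target over TCP with the data digest enabled (--ddgst); this is what
  # makes the crc32c accel statistics queried later meaningful.
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the configured workload (randread, 4096-byte I/O, queue depth 128, 2 s).
  "$BPERF_PY" -s "$SOCK" perform_tests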
00:27:29.253 17521.00 IOPS, 68.44 MiB/s [2024-11-15T10:45:30.106Z] 17687.50 IOPS, 69.09 MiB/s 00:27:29.253 Latency(us) 00:27:29.253 [2024-11-15T10:45:30.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.253 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:29.253 nvme0n1 : 2.04 17365.62 67.83 0.00 0.00 7220.69 2636.33 46232.67 00:27:29.253 [2024-11-15T10:45:30.106Z] =================================================================================================================== 00:27:29.253 [2024-11-15T10:45:30.106Z] Total : 17365.62 67.83 0.00 0.00 7220.69 2636.33 46232.67 00:27:29.253 { 00:27:29.253 "results": [ 00:27:29.253 { 00:27:29.253 "job": "nvme0n1", 00:27:29.253 "core_mask": "0x2", 00:27:29.253 "workload": "randread", 00:27:29.253 "status": "finished", 00:27:29.253 "queue_depth": 128, 00:27:29.253 "io_size": 4096, 00:27:29.253 "runtime": 2.044442, 00:27:29.253 "iops": 17365.618589326576, 00:27:29.253 "mibps": 67.83444761455694, 00:27:29.253 "io_failed": 0, 00:27:29.253 "io_timeout": 0, 00:27:29.253 "avg_latency_us": 7220.690875700646, 00:27:29.253 "min_latency_us": 2636.3345454545456, 00:27:29.253 "max_latency_us": 46232.66909090909 00:27:29.254 } 00:27:29.254 ], 00:27:29.254 "core_count": 1 00:27:29.254 } 00:27:29.254 11:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:29.254 11:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:29.254 11:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:29.254 11:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:29.254 | select(.opcode=="crc32c") 00:27:29.254 | "\(.module_name) \(.executed)"' 00:27:29.254 11:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1391726 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1391726 ']' 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1391726 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1391726 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1391726' 00:27:29.254 killing process with pid 1391726 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1391726 00:27:29.254 Received shutdown signal, test time was about 2.000000 seconds 00:27:29.254 00:27:29.254 Latency(us) 00:27:29.254 [2024-11-15T10:45:30.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.254 [2024-11-15T10:45:30.107Z] =================================================================================================================== 00:27:29.254 [2024-11-15T10:45:30.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.254 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1391726 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1392387 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1392387 /var/tmp/bperf.sock 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1392387 ']' 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:29.513 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:29.513 [2024-11-15 11:45:30.277473] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:27:29.513 [2024-11-15 11:45:30.277521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392387 ] 00:27:29.513 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.513 Zero copy mechanism will not be used. 00:27:29.513 [2024-11-15 11:45:30.332970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.772 [2024-11-15 11:45:30.370209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.772 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:29.772 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:29.772 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:29.772 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:29.772 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:30.032 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.032 11:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.650 nvme0n1 00:27:30.650 11:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:30.650 11:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:30.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:30.650 Zero copy mechanism will not be used. 00:27:30.650 Running I/O for 2 seconds... 
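[editorial note] The xtrace above is the digest helper driving bdevperf entirely over its RPC socket: bdevperf is launched paused, the accel framework is started, an NVMe/TCP controller is attached with data digest enabled, and perform_tests kicks off the run. A minimal standalone sketch of that same sequence, assuming an nvmf target is already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 (the values used throughout this log), would look like this:

```bash
#!/usr/bin/env bash
# Standalone sketch of the sequence traced above; paths, flags and the
# 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 target are taken from this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf paused (--wait-for-rpc) so it can be configured over $SOCK.
$SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
BPERF_PID=$!
until [ -S $SOCK ]; do sleep 0.1; done    # the harness does this via waitforlisten

# 2. Finish framework init, then attach the NVMe/TCP controller with data digest (--ddgst).
$SPDK/scripts/rpc.py -s $SOCK framework_start_init
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Run the 2-second workload against the resulting bdev (nvme0n1 in the trace).
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

kill $BPERF_PID    # the trace tears bperf down with killprocess afterwards
```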
00:27:32.960 4720.00 IOPS, 590.00 MiB/s [2024-11-15T10:45:33.813Z] 4715.00 IOPS, 589.38 MiB/s 00:27:32.960 Latency(us) 00:27:32.960 [2024-11-15T10:45:33.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:32.960 nvme0n1 : 2.00 4717.86 589.73 0.00 0.00 3388.88 1228.80 5213.09 00:27:32.960 [2024-11-15T10:45:33.813Z] =================================================================================================================== 00:27:32.960 [2024-11-15T10:45:33.813Z] Total : 4717.86 589.73 0.00 0.00 3388.88 1228.80 5213.09 00:27:32.960 { 00:27:32.960 "results": [ 00:27:32.960 { 00:27:32.960 "job": "nvme0n1", 00:27:32.960 "core_mask": "0x2", 00:27:32.960 "workload": "randread", 00:27:32.960 "status": "finished", 00:27:32.960 "queue_depth": 16, 00:27:32.960 "io_size": 131072, 00:27:32.960 "runtime": 2.002181, 00:27:32.960 "iops": 4717.85517892738, 00:27:32.960 "mibps": 589.7318973659225, 00:27:32.960 "io_failed": 0, 00:27:32.960 "io_timeout": 0, 00:27:32.960 "avg_latency_us": 3388.8812180239834, 00:27:32.960 "min_latency_us": 1228.8, 00:27:32.960 "max_latency_us": 5213.090909090909 00:27:32.960 } 00:27:32.960 ], 00:27:32.960 "core_count": 1 00:27:32.961 } 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:32.961 | select(.opcode=="crc32c") 00:27:32.961 | "\(.module_name) \(.executed)"' 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1392387 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1392387 ']' 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1392387 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1392387 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1392387' 00:27:32.961 killing process with pid 1392387 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1392387 00:27:32.961 Received shutdown signal, test time was about 2.000000 seconds 00:27:32.961 00:27:32.961 Latency(us) 00:27:32.961 [2024-11-15T10:45:33.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.961 [2024-11-15T10:45:33.814Z] =================================================================================================================== 00:27:32.961 [2024-11-15T10:45:33.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.961 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1392387 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1393051 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1393051 /var/tmp/bperf.sock 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1393051 ']' 00:27:33.220 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.221 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:33.221 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.221 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:33.221 11:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:33.221 [2024-11-15 11:45:33.955959] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:27:33.221 [2024-11-15 11:45:33.956020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393051 ] 00:27:33.221 [2024-11-15 11:45:34.021860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.221 [2024-11-15 11:45:34.062134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.479 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:33.479 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:33.479 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:33.479 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:33.480 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:33.739 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.739 11:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.306 nvme0n1 00:27:34.306 11:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:34.306 11:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:34.306 Running I/O for 2 seconds... 
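[editorial note] After each of these runs the script checks that CRC-32C digests were really computed by the expected accel module: it calls accel_get_stats over the same bperf socket and filters the JSON with the jq expression that appears verbatim in the trace above (host/digest.sh@93-96). A standalone equivalent of that check, using the socket path from this log, is sketched below:

```bash
#!/usr/bin/env bash
# Re-create the get_accel_stats / crc32c check traced above (host/digest.sh@93-96).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
exp_module=software   # scan_dsa=false in these runs, so the software crc32c module is expected

read -r acc_module acc_executed < <(
  $SPDK/scripts/rpc.py -s $SOCK accel_get_stats |
  jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# The check passes only if the expected module executed at least one crc32c operation.
(( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]] && echo "digest offload check OK"
```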
00:27:36.616 17429.00 IOPS, 68.08 MiB/s [2024-11-15T10:45:37.469Z] 17510.50 IOPS, 68.40 MiB/s 00:27:36.616 Latency(us) 00:27:36.616 [2024-11-15T10:45:37.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.616 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:36.616 nvme0n1 : 2.01 17512.94 68.41 0.00 0.00 7293.84 6494.02 16443.58 00:27:36.616 [2024-11-15T10:45:37.469Z] =================================================================================================================== 00:27:36.616 [2024-11-15T10:45:37.469Z] Total : 17512.94 68.41 0.00 0.00 7293.84 6494.02 16443.58 00:27:36.616 { 00:27:36.616 "results": [ 00:27:36.616 { 00:27:36.616 "job": "nvme0n1", 00:27:36.616 "core_mask": "0x2", 00:27:36.616 "workload": "randwrite", 00:27:36.616 "status": "finished", 00:27:36.616 "queue_depth": 128, 00:27:36.616 "io_size": 4096, 00:27:36.616 "runtime": 2.00703, 00:27:36.616 "iops": 17512.94200883893, 00:27:36.616 "mibps": 68.40992972202707, 00:27:36.616 "io_failed": 0, 00:27:36.616 "io_timeout": 0, 00:27:36.616 "avg_latency_us": 7293.842679295156, 00:27:36.616 "min_latency_us": 6494.021818181818, 00:27:36.616 "max_latency_us": 16443.578181818182 00:27:36.616 } 00:27:36.616 ], 00:27:36.616 "core_count": 1 00:27:36.616 } 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:36.616 | select(.opcode=="crc32c") 00:27:36.616 | "\(.module_name) \(.executed)"' 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1393051 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1393051 ']' 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1393051 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:36.616 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1393051 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1393051' 00:27:36.875 killing process with pid 1393051 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1393051 00:27:36.875 Received shutdown signal, test time was about 2.000000 seconds 00:27:36.875 00:27:36.875 Latency(us) 00:27:36.875 [2024-11-15T10:45:37.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.875 [2024-11-15T10:45:37.728Z] =================================================================================================================== 00:27:36.875 [2024-11-15T10:45:37.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1393051 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1393669 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1393669 /var/tmp/bperf.sock 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1393669 ']' 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:36.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:36.875 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.875 [2024-11-15 11:45:37.724251] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:27:36.875 [2024-11-15 11:45:37.724311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393669 ] 00:27:36.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:36.876 Zero copy mechanism will not be used. 00:27:37.134 [2024-11-15 11:45:37.790820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.134 [2024-11-15 11:45:37.831083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.134 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.134 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:27:37.134 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:37.134 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:37.134 11:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:37.702 11:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.702 11:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.961 nvme0n1 00:27:37.962 11:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:37.962 11:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:37.962 Zero copy mechanism will not be used. 00:27:37.962 Running I/O for 2 seconds... 
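[editorial note] When reading the result blocks above, iops and mibps are two views of the same measurement: MiB/s = IOPS x io_size / 2^20. A quick cross-check of the figures already reported (mibps below is a throwaway helper for this check, not part of the test scripts):

```bash
#!/usr/bin/env bash
# Sanity-check the mibps values in the result blocks above.
mibps() { echo "scale=3; $1 * $2 / 1048576" | bc; }

mibps 4717.86 131072    # randread, 128 KiB IOs -> 589.732 (reported as 589.73 MiB/s)
mibps 17512.94 4096     # randwrite, 4 KiB IOs  -> 68.409  (reported as 68.41 MiB/s)
```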
00:27:40.270 5197.00 IOPS, 649.62 MiB/s [2024-11-15T10:45:41.123Z] 5044.50 IOPS, 630.56 MiB/s 00:27:40.270 Latency(us) 00:27:40.270 [2024-11-15T10:45:41.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.270 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:40.270 nvme0n1 : 2.00 5044.38 630.55 0.00 0.00 3167.22 2383.13 5838.66 00:27:40.270 [2024-11-15T10:45:41.124Z] =================================================================================================================== 00:27:40.271 [2024-11-15T10:45:41.124Z] Total : 5044.38 630.55 0.00 0.00 3167.22 2383.13 5838.66 00:27:40.271 { 00:27:40.271 "results": [ 00:27:40.271 { 00:27:40.271 "job": "nvme0n1", 00:27:40.271 "core_mask": "0x2", 00:27:40.271 "workload": "randwrite", 00:27:40.271 "status": "finished", 00:27:40.271 "queue_depth": 16, 00:27:40.271 "io_size": 131072, 00:27:40.271 "runtime": 2.004012, 00:27:40.271 "iops": 5044.380971770628, 00:27:40.271 "mibps": 630.5476214713285, 00:27:40.271 "io_failed": 0, 00:27:40.271 "io_timeout": 0, 00:27:40.271 "avg_latency_us": 3167.216280721949, 00:27:40.271 "min_latency_us": 2383.1272727272726, 00:27:40.271 "max_latency_us": 5838.6618181818185 00:27:40.271 } 00:27:40.271 ], 00:27:40.271 "core_count": 1 00:27:40.271 } 00:27:40.271 11:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:40.271 11:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:40.271 11:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:40.271 11:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:40.271 | select(.opcode=="crc32c") 00:27:40.271 | "\(.module_name) \(.executed)"' 00:27:40.271 11:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1393669 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1393669 ']' 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1393669 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:40.271 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1393669 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1393669' 00:27:40.530 killing process with pid 1393669 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1393669 00:27:40.530 Received shutdown signal, test time was about 2.000000 seconds 00:27:40.530 00:27:40.530 Latency(us) 00:27:40.530 [2024-11-15T10:45:41.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.530 [2024-11-15T10:45:41.383Z] =================================================================================================================== 00:27:40.530 [2024-11-15T10:45:41.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1393669 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1391705 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1391705 ']' 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1391705 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:40.530 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1391705 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1391705' 00:27:40.788 killing process with pid 1391705 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1391705 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1391705 00:27:40.788 00:27:40.788 real 0m15.354s 00:27:40.788 user 0m30.995s 00:27:40.788 sys 0m4.370s 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:40.788 ************************************ 00:27:40.788 END TEST nvmf_digest_clean 00:27:40.788 ************************************ 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:40.788 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:41.047 ************************************ 00:27:41.047 START TEST nvmf_digest_error 00:27:41.047 ************************************ 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1394410 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1394410 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1394410 ']' 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:41.047 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.047 [2024-11-15 11:45:41.711225] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:27:41.047 [2024-11-15 11:45:41.711278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.047 [2024-11-15 11:45:41.812573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.048 [2024-11-15 11:45:41.859844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.048 [2024-11-15 11:45:41.859884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.048 [2024-11-15 11:45:41.859895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.048 [2024-11-15 11:45:41.859905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.048 [2024-11-15 11:45:41.859913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.048 [2024-11-15 11:45:41.860602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 [2024-11-15 11:45:41.973231] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.306 11:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 null0 00:27:41.306 [2024-11-15 11:45:42.071723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.306 [2024-11-15 11:45:42.095951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:41.306 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1394445 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1394445 /var/tmp/bperf.sock 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1394445 ']' 
00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:41.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:41.307 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.307 [2024-11-15 11:45:42.153339] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:27:41.307 [2024-11-15 11:45:42.153393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394445 ] 00:27:41.565 [2024-11-15 11:45:42.219659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.565 [2024-11-15 11:45:42.259839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.565 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.565 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:41.565 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.565 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.825 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:41.825 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.825 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.825 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.825 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.825 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.395 nvme0n1 00:27:42.395 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:42.395 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.395 11:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
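[editorial note] The run that follows is the digest-error variant: the trace above routes crc32c on the target (rpc_cmd, i.e. the target's own RPC socket) through the accel "error" module and then arms corruption with -t corrupt -i 256, while the initiator attaches with --ddgst and unlimited bdev retries. Once corrupted digests start flowing, every affected read fails digest verification on the host, which is what produces the wall of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions below. A condensed sketch of that setup, with all flags taken verbatim from the trace:

```bash
#!/usr/bin/env bash
# Condensed from the nvmf_digest_error setup traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Target side: route crc32c through the accel "error" module so digests can be tampered with.
# (In the trace the target was started with --wait-for-rpc, so this runs before its framework init.)
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error

# Initiator (bperf) side: count NVMe errors, retry indefinitely, attach with data digest.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm the corruption (flags exactly as traced) and start the 2-second run.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
```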
00:27:42.395 11:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.395 11:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:42.395 11:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.395 Running I/O for 2 seconds... 00:27:42.395 [2024-11-15 11:45:43.148097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.148129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.148139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.395 [2024-11-15 11:45:43.164249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.164273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.164282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.395 [2024-11-15 11:45:43.180344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.180369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.180378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.395 [2024-11-15 11:45:43.196255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.196276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.196285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.395 [2024-11-15 11:45:43.212059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.212080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.212088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.395 [2024-11-15 11:45:43.223047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.223067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.223079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:42.395 [2024-11-15 11:45:43.238833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.395 [2024-11-15 11:45:43.238854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.395 [2024-11-15 11:45:43.238863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.254543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.254565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.254573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.267863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.267883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.267891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.279885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.279905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.279913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.296556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.296576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.296583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.309233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.309251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.309259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.325975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.326002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.339679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.339699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.339706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.353704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.353728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.353736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.368296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.368316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.368325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.382595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.382615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.382623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.397219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.397239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.397246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.414757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.414777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.414785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.425465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.655 [2024-11-15 11:45:43.425485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.655 [2024-11-15 11:45:43.425492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.655 [2024-11-15 11:45:43.439931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.656 [2024-11-15 11:45:43.439950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.656 [2024-11-15 11:45:43.439958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.656 [2024-11-15 11:45:43.454403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.656 [2024-11-15 11:45:43.454422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.656 [2024-11-15 11:45:43.454430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.656 [2024-11-15 11:45:43.468736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.656 [2024-11-15 11:45:43.468755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.656 [2024-11-15 11:45:43.468763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.656 [2024-11-15 11:45:43.483086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.656 [2024-11-15 11:45:43.483106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.656 [2024-11-15 11:45:43.483114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.656 [2024-11-15 11:45:43.497738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.656 [2024-11-15 11:45:43.497757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.656 [2024-11-15 11:45:43.497765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.511846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.511867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.511875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.528225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.528244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.924 [2024-11-15 11:45:43.528252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.542859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.542879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.542887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.557382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.557402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.557410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.571758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.571778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.571787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.587506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.587526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.587534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.601635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.601656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.601667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.616269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.616289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.616297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.630781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.630802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:21230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.630810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.645095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.645114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.645122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.660600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.660620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.660628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.924 [2024-11-15 11:45:43.674838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.924 [2024-11-15 11:45:43.674858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.924 [2024-11-15 11:45:43.674867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.690868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.690890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.690898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.704430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.704449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.704462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.720404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.720424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.720432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.733667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.733690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.733698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.746184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.746204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.746212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.760535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.760554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.760562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.925 [2024-11-15 11:45:43.774833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:42.925 [2024-11-15 11:45:43.774851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.925 [2024-11-15 11:45:43.774859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.789213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.789233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.789241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.803575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.803594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.803603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.820509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.820529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.820537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.834762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 
00:27:43.190 [2024-11-15 11:45:43.834782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.834789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.849111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.849130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.849138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.863413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.863433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.863441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.877747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.877767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.877775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.892137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.892157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.892165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.906496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.906515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.906523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.920761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.920780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.920788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.935070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.935089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.935096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.949555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.949575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.949583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.963963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.963982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.963989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.978198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.978221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.978229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:43.992574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:43.992594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:43.992602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:44.006983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.190 [2024-11-15 11:45:44.007004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.190 [2024-11-15 11:45:44.007011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.190 [2024-11-15 11:45:44.021360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.191 [2024-11-15 11:45:44.021379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.191 [2024-11-15 11:45:44.021386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.191 [2024-11-15 11:45:44.035704] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.191 [2024-11-15 11:45:44.035724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.191 [2024-11-15 11:45:44.035732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.464 [2024-11-15 11:45:44.050255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.050276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.050284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.067354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.067374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.067382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.081733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.081753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.081760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.096104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.096125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.096133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.110480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.110500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.110507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.124820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.124841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.124848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:43.465 17393.00 IOPS, 67.94 MiB/s [2024-11-15T10:45:44.318Z] [2024-11-15 11:45:44.136311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.136333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.136341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.151277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.151296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.151304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.165939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.165959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.165966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.180275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.180295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.180303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.194829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.194849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.194856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.209334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.209355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.209363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.225426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.225446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.225462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.239594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.239613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.239621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.254969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.254988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.254996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.269573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.269593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.269601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.284051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.284072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.284079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.298488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.298507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.298516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.465 [2024-11-15 11:45:44.312967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.465 [2024-11-15 11:45:44.312987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.465 [2024-11-15 11:45:44.312995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.326696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.326716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:43.731 [2024-11-15 11:45:44.326724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.341103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.341122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.341130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.357137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.357160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.357168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.371341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.371359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.371367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.385418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.385437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.385445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.399788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.399807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.399815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.414218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.414237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.414245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.428909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.428928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17613 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.428936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.442874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.442894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.442901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.457356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.457376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.457383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.471737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.471757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.471765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.731 [2024-11-15 11:45:44.486106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.731 [2024-11-15 11:45:44.486124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.731 [2024-11-15 11:45:44.486132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.732 [2024-11-15 11:45:44.500525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.732 [2024-11-15 11:45:44.500544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.732 [2024-11-15 11:45:44.500552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.732 [2024-11-15 11:45:44.514793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.732 [2024-11-15 11:45:44.514812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.732 [2024-11-15 11:45:44.514820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.732 [2024-11-15 11:45:44.530420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.732 [2024-11-15 11:45:44.530441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.732 [2024-11-15 11:45:44.530449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.732 [2024-11-15 11:45:44.544210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.732 [2024-11-15 11:45:44.544230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.732 [2024-11-15 11:45:44.544238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.732 [2024-11-15 11:45:44.556335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.732 [2024-11-15 11:45:44.556355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.732 [2024-11-15 11:45:44.556363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.732 [2024-11-15 11:45:44.573017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.732 [2024-11-15 11:45:44.573037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.732 [2024-11-15 11:45:44.573044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.995 [2024-11-15 11:45:44.586299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.995 [2024-11-15 11:45:44.586318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.995 [2024-11-15 11:45:44.586326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.995 [2024-11-15 11:45:44.600813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.600835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.600843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.616641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.616662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.616670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.630779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.630798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.630805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.645158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.645177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.645184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.658665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.658684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.658692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.675646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.675665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.675673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.690001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.690019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.690027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.704358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.704377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.704385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.718742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.718761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.718768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.733143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.733161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.733169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.747523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.747543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.747551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.761830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.761849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.761857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.776156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.776175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.790848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.790867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.790874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.804403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.804423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.804431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.819831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.819850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.819858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-11-15 11:45:44.834348] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:43.996 [2024-11-15 11:45:44.834367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-11-15 11:45:44.834375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.848793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.848813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.848824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.863967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.863987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.863994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.878237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.878256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.878264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.892618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.892637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.892644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.906808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.906827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.906834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.921990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.922010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.922017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.935025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.935044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.935051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.947255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.947275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.947282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.961627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.961647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.256 [2024-11-15 11:45:44.961655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.256 [2024-11-15 11:45:44.979355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.256 [2024-11-15 11:45:44.979377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:44.979385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:44.992887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:44.992906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:44.992914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.007758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.007777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.007784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.021503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.021523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.021530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.035846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.035864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.035872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.048850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.048870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.048877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.063776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.063795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.063803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.077937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.077957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.077965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.257 [2024-11-15 11:45:45.094965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.257 [2024-11-15 11:45:45.094986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.257 [2024-11-15 11:45:45.094993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.515 [2024-11-15 11:45:45.109471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.516 [2024-11-15 11:45:45.109492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.516 [2024-11-15 11:45:45.109500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.516 [2024-11-15 11:45:45.122231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1287ca0) 00:27:44.516 [2024-11-15 11:45:45.122252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.516 [2024-11-15 11:45:45.122260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.516 17502.00 IOPS, 68.37 MiB/s 00:27:44.516 Latency(us) 00:27:44.516 [2024-11-15T10:45:45.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.516 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:44.516 nvme0n1 : 2.00 17537.74 68.51 0.00 0.00 7293.25 3991.74 20018.27 00:27:44.516 [2024-11-15T10:45:45.369Z] =================================================================================================================== 00:27:44.516 [2024-11-15T10:45:45.369Z] Total : 17537.74 68.51 0.00 0.00 7293.25 3991.74 20018.27 00:27:44.516 { 00:27:44.516 "results": [ 00:27:44.516 { 00:27:44.516 "job": "nvme0n1", 00:27:44.516 "core_mask": "0x2", 00:27:44.516 "workload": "randread", 00:27:44.516 "status": "finished", 00:27:44.516 "queue_depth": 128, 00:27:44.516 "io_size": 4096, 00:27:44.516 "runtime": 2.003223, 00:27:44.516 "iops": 17537.737935317236, 00:27:44.516 "mibps": 68.50678880983295, 00:27:44.516 "io_failed": 0, 00:27:44.516 "io_timeout": 0, 00:27:44.516 "avg_latency_us": 7293.245270098227, 00:27:44.516 "min_latency_us": 3991.7381818181816, 00:27:44.516 "max_latency_us": 20018.269090909092 00:27:44.516 } 00:27:44.516 ], 00:27:44.516 "core_count": 1 00:27:44.516 } 00:27:44.516 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:44.516 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:44.516 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:44.516 | .driver_specific 00:27:44.516 | .nvme_error 00:27:44.516 | .status_code 00:27:44.516 | .command_transient_transport_error' 00:27:44.516 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1394445 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1394445 ']' 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1394445 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1394445 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1394445' 00:27:44.792 killing process with pid 1394445 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 
1394445 00:27:44.792 Received shutdown signal, test time was about 2.000000 seconds 00:27:44.792 00:27:44.792 Latency(us) 00:27:44.792 [2024-11-15T10:45:45.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.792 [2024-11-15T10:45:45.645Z] =================================================================================================================== 00:27:44.792 [2024-11-15T10:45:45.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.792 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1394445 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1395217 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1395217 /var/tmp/bperf.sock 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1395217 ']' 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.051 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.051 [2024-11-15 11:45:45.703343] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:27:45.051 [2024-11-15 11:45:45.703403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395217 ] 00:27:45.051 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:45.051 Zero copy mechanism will not be used. 
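At this point the previous bperf process (pid 1394445) has been killed and the suite moves on to run_bperf_err randread 131072 16: it records the workload parameters (rw=randread, bs=131072, qd=16), launches a fresh bdevperf instance in wait-for-RPC mode on its own UNIX socket, and waits for that socket to come up before configuring anything. A minimal standalone sketch of that launch step, using only the binary path and flags visible in the trace (the readiness poll below is illustrative, not the suite's waitforlisten helper, and the retry count and sleep interval are assumptions):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# -z keeps bdevperf idle until perform_tests is issued over the RPC socket;
# -m 2 pins it to core 1 (core mask 0x2), -t 2 sets the 2-second run used later.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# One way to wait for the RPC socket: poll a cheap RPC until it answers.
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done
echo "bdevperf pid $bperfpid listening on $BPERF_SOCK"

The EAL parameters line (file-prefix=spdk_pid1395217) and the core/reactor notices that follow in the trace are that instance's startup output.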
00:27:45.051 [2024-11-15 11:45:45.769287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.051 [2024-11-15 11:45:45.809623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.310 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:45.310 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:45.310 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.310 11:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.568 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:45.568 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.568 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.568 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.568 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.568 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.827 nvme0n1 00:27:45.827 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:45.827 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.827 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.827 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.827 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:45.827 11:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.087 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:46.087 Zero copy mechanism will not be used. 00:27:46.087 Running I/O for 2 seconds... 
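The RPC sequence traced above is the heart of this digest-error case: the bperf instance is told to collect NVMe error statistics with --nvme-error-stat and given --bdev-retry-count -1, crc32c error injection is first disabled, the controller is attached over TCP with data digest checking enabled (--ddgst), crc32c corruption is then injected through the accel error RPC, and perform_tests starts the 2-second randread run whose (00/22) completions fill the rest of the log. A condensed sketch of the same sequence, restricted to commands and flags that appear in the trace (the socket behind rpc_cmd is not shown in this excerpt, so the sketch assumes rpc.py's default socket for those two calls):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
APP_RPC="$SPDK_DIR/scripts/rpc.py"   # assumption: default socket for the rpc_cmd calls

# Collect NVMe error counters (read back later via bdev_get_iostat) and set the
# bdev retry count to -1 so errored I/O is retried instead of failing the job.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean state, then attach the target with data digest enabled.
$APP_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption (the -o/-t/-i arguments are copied from the trace)
# so received data digests stop matching and reads complete with status (00/22).
$APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the randread workload configured at bdevperf launch.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Once the run finishes, the same bdev_get_iostat / jq pipeline shown earlier for nvme0n1 can read back .driver_specific.nvme_error.status_code.command_transient_transport_error to confirm that the injected corruptions were observed as transient transport errors.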
00:27:46.087 [2024-11-15 11:45:46.748249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.087 [2024-11-15 11:45:46.748289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.087 [2024-11-15 11:45:46.748299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.087 [2024-11-15 11:45:46.755261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.087 [2024-11-15 11:45:46.755287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.087 [2024-11-15 11:45:46.755295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.087 [2024-11-15 11:45:46.762453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.087 [2024-11-15 11:45:46.762487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.087 [2024-11-15 11:45:46.762496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.087 [2024-11-15 11:45:46.769429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.087 [2024-11-15 11:45:46.769451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.769467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.776447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.776481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.776489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.783334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.783360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.783367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.790186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.790208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.790216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.797097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.797119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.797127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.804024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.804047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.804054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.811005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.811027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.811034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.818212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.818235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.818244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.825351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.825374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.825382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.832543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.832566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.832575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.839451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.839480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.839489] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.846186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.846208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.846217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.849740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.849763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.849772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.856404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.856425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.856434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.862876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.862897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.862906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.869269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.869292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.869302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.875506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.875530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.875540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.881875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.881898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.881906] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.888315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.888337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.888346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.895053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.895075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.895087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.901663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.901686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.901694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.908078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.908099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.908107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.914817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.914840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.914848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.921594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.921616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.921624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.928486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.928507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.088 [2024-11-15 11:45:46.928515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.088 [2024-11-15 11:45:46.934829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.088 [2024-11-15 11:45:46.934850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.088 [2024-11-15 11:45:46.934858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.941549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.941570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.941577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.948269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.948289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.948297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.954956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.954980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.954987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.961609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.961629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.961637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.968274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.968294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.968301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.974983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.975004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.981659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.981679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.981686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.988373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.988394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.988401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:46.995027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:46.995048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:46.995056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.001472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.001492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.001500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.008228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.008249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.008257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.015018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.015039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.015047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.021815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.021836] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.021843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.028529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.028550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.028557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.035179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.035200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.035207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.041839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.041859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.041867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.048672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.048693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.048700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.055352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.055376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.055383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.061834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.061855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.061862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.068496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.068516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.068531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.074932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.074952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.074960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.081648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.081669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.081676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.088086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.349 [2024-11-15 11:45:47.088107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.349 [2024-11-15 11:45:47.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.349 [2024-11-15 11:45:47.094749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.094769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.094777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.101444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.101469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.101477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.108144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.108164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.108171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.114802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.114823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.114830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.121542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.121562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.121569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.128260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.128280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.128287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.134984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.135004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.135012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.141448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.141474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.141482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.148182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.148202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.148209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.154842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.154863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.154870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.161524] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.161543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.161550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.167911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.167931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.167938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.174315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.174336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.174343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.180818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.180839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.180849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.187318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.187337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.187344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.350 [2024-11-15 11:45:47.194075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.350 [2024-11-15 11:45:47.194096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.350 [2024-11-15 11:45:47.194103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.200560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.200582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.200590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:46.610 [2024-11-15 11:45:47.206961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.206981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.206989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.213623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.213644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.213651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.220284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.220304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.220311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.226909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.226930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.226937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.233613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.233633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.233641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.240271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.240294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.240301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.246931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.246951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.246958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.254776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.254797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.254804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.263719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.263741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.263749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.271942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.271963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.271971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.280058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.280079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.280087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.287637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.287657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.287664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.294376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.294396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.294404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.301143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.301163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.610 [2024-11-15 11:45:47.301171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.610 [2024-11-15 11:45:47.307827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.610 [2024-11-15 11:45:47.307847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.307855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.314521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.314541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.314548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.321359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.321379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.321387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.328071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.328091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.328098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.334642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.334663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.334670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.341366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.341387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.341394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.348079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.348100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.348107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.354836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.354856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.354864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.361569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.361589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.361601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.368202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.368223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.368231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.374928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.374948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.374956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.381635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.381656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.381664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.388349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.388369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.388377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.395038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.395057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.611 [2024-11-15 11:45:47.395065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.401469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.401488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.401495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.408093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.408113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.408120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.414777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.414797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.414805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.421465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.421489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.421496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.428173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.428193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.428201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.433884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.433906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.433914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.440659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.440679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.440687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.447361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.447380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.447388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.611 [2024-11-15 11:45:47.454084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.611 [2024-11-15 11:45:47.454104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.611 [2024-11-15 11:45:47.454111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.460794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.460815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.460823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.467442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.467469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.467477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.474043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.474062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.474069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.481140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.481160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.481168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.489393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.489413] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.489421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.498613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.498635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.507148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.507169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.507177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.515760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.515782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.515790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.524643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.524667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.524675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.533286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.533309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.533317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.541983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.542004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.549899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.549920] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.549930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.871 [2024-11-15 11:45:47.558383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.871 [2024-11-15 11:45:47.558404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.871 [2024-11-15 11:45:47.558412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.566810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.566831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.566838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.575488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.575509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.575516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.584215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.584236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.584244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.592818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.592838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.592846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.600910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.600932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.600940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.608206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 
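The repeated *ERROR*/*NOTICE* pairs in this stretch of the log show the SPDK NVMe/TCP initiator (nvme_tcp_accel_seq_recv_compute_crc32_done) rejecting received data whose data digest does not match the CRC32C it computes over the payload, after which each affected READ is completed with the status printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). For reference, a minimal sketch of that digest comparison follows, assuming a plain bitwise CRC-32C (Castagnoli) implementation; the helper name data_digest_ok is hypothetical and only mirrors the comparison the transport performs, it is not SPDK code.

def crc32c(data: bytes) -> int:
    # Reflected CRC-32C (Castagnoli polynomial), table-free bitwise form.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(payload: bytes, received_ddgst: int) -> bool:
    # Compare the digest computed over the received payload with the digest
    # carried alongside it; a mismatch is what the log above reports as a
    # "data digest error".
    return crc32c(payload) == received_ddgst

if __name__ == "__main__":
    # Standard CRC-32C check string; the expected digest is 0xe3069283.
    print(hex(crc32c(b"123456789")))
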
00:27:46.872 [2024-11-15 11:45:47.608227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.608234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.615698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.615719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.615727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.622838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.622859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.622866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.630512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.630532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.630540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.638794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.638815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.638824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.647039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.647059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.647067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.655578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.655599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.655607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.663919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.663940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.663948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.672411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.672433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.672441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.680256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.680277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.680285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.687897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.687919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.687931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.694706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.694727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.694734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.701499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.701520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.701527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.708318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.708338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.708346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.714766] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.714787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.714794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.872 [2024-11-15 11:45:47.721402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:46.872 [2024-11-15 11:45:47.721422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.872 [2024-11-15 11:45:47.721430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.727630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.727651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.727659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.734160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.734181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.734188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.740469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.740489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.132 4427.00 IOPS, 553.38 MiB/s [2024-11-15T10:45:47.985Z] [2024-11-15 11:45:47.747533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.747558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.747566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.754282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.754302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.754309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.761035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.761055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.761062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.767796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.767817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.767824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.774518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.774538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.774545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.781254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.781274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.781281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.788032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.788053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.788060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.794806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.794826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.794833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.801403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.801424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.132 [2024-11-15 11:45:47.801432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.132 [2024-11-15 11:45:47.808083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.132 [2024-11-15 11:45:47.808103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.808111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.814864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.814884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.814892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.821570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.821590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.821597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.828303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.828323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.828330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.835111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.835132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.835139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.841890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.841909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.841916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.848654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.848674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 
[2024-11-15 11:45:47.848681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.855438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.855463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.855470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.862184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.862204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.862214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.868932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.868951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.868958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.875636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.875656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.875663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.882357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.882378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.882385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.889093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.889113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.889121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.895924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.895944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.895952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.902351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.902371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.902379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.908862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.908882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.908889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.915088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.915108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.915115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.921575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.921595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.921602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.928062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.928083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.928090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.934534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.934553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.934561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.940969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.940990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.940997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.947799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.947819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.947827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.954354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.954373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.954380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.960988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.961009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.961016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.967283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.967303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.967310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.973709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.973729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.973743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.133 [2024-11-15 11:45:47.980132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.133 [2024-11-15 11:45:47.980152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.133 [2024-11-15 11:45:47.980160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:47.986154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:47.986175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:47.986183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:47.992529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:47.992548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:47.992556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:47.999102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:47.999122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:47.999130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:48.005624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:48.005645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:48.005653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:48.012155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:48.012175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:48.012182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:48.018760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:48.018779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:48.018787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:48.025267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 [2024-11-15 11:45:48.025287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:48.025295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.394 [2024-11-15 11:45:48.031862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.394 
[2024-11-15 11:45:48.031886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.394 [2024-11-15 11:45:48.031894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.038155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.038176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.038183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.044321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.044341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.044348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.051124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.051144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.051151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.057723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.057744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.057751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.064307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.064328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.064335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.070857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.070877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.070884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.077324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.077345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.077352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.083620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.083640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.083648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.090124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.090144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.090151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.096603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.096625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.096632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.103390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.103411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.103419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.109877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.109898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.109905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.116073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.116094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.116101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.122600] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.122622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.122629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.129136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.129157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.129165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.135344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.135365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.135372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.141920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.141940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.141956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.148448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.148486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.148495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.154699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.154722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.154730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.160961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.160981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.160989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
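Each completion entry above carries a (SCT/SC) status pair plus qid, cid, cdw0, sqhd, p, m and dnr fields. When triaging a log like this one, a quick tally of completions per status pair makes it obvious that every error here is the same transient transport status (00/22). A small sketch follows, assuming the console output has been saved to a file; "console.log" is a placeholder name, not a path produced by the autotest.

import re
from collections import Counter

# Matches the status pair and queue/command ids in the completion prints
# above, e.g. "... TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 ...".
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def tally_completions(log_text: str) -> Counter:
    # Count completions per (SCT, SC) status pair.
    return Counter(
        (m.group("sct"), m.group("sc"))
        for m in COMPLETION_RE.finditer(log_text)
    )

if __name__ == "__main__":
    with open("console.log", "r", errors="replace") as f:
        print(tally_completions(f.read()))
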
00:27:47.395 [2024-11-15 11:45:48.167311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.167332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.167339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.395 [2024-11-15 11:45:48.173635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.395 [2024-11-15 11:45:48.173656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.395 [2024-11-15 11:45:48.173663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.180076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.180097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.180105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.186433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.186454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.186469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.192859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.192880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.192887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.199411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.199435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.199442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.205900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.205920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.205927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.212331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.212351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.212358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.219061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.219082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.219089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.226009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.226036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.226048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.232632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.232654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.232662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.396 [2024-11-15 11:45:48.239233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.396 [2024-11-15 11:45:48.239255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.396 [2024-11-15 11:45:48.239263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.656 [2024-11-15 11:45:48.245829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.656 [2024-11-15 11:45:48.245852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.245860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.252068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.252090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.252099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.258618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.258640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.258647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.265058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.265080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.265087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.271371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.271392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.271400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.277934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.277955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.277963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.284472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.284492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.284500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.290930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.290951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.290959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.297329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.297350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.297358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.303850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.303871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.303879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.310328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.310349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.310361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.316778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.316799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.316806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.323432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.323453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.323466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.329923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.329944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.329951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.336415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.336436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.336443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.344105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.344126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 
[2024-11-15 11:45:48.344134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.352749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.352770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.352778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.361707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.361728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.361736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.370964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.370985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.370993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.379711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.379738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.379747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.388825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.388847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.388856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.397921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.397943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.397951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.406522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.406544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.406551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.415829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.415852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.415860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.425292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.425314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.425322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.434239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.434261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.434269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.442688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.442708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.442716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.451617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.451639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.451651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.460225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.460246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.657 [2024-11-15 11:45:48.460254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.657 [2024-11-15 11:45:48.467891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.657 [2024-11-15 11:45:48.467913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.658 [2024-11-15 11:45:48.467920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.658 [2024-11-15 11:45:48.475314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.658 [2024-11-15 11:45:48.475335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.658 [2024-11-15 11:45:48.475343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.658 [2024-11-15 11:45:48.482306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.658 [2024-11-15 11:45:48.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.658 [2024-11-15 11:45:48.482333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.658 [2024-11-15 11:45:48.489218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.658 [2024-11-15 11:45:48.489239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.658 [2024-11-15 11:45:48.489247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.658 [2024-11-15 11:45:48.496118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.658 [2024-11-15 11:45:48.496139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.658 [2024-11-15 11:45:48.496146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.658 [2024-11-15 11:45:48.503349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.658 [2024-11-15 11:45:48.503370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.658 [2024-11-15 11:45:48.503377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.510281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.510303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.510311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.517173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.517197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.517204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.523952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.523973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.523980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.530818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.530839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.530846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.537930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.537951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.537958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.544892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.544913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.544920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.551903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.551923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.551931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.559352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.559372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.559379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.566879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 
[2024-11-15 11:45:48.566900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.566908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.573805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.573825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.573833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.580425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.580445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.580453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.587027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.587047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.587055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.593523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.593543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.593550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.918 [2024-11-15 11:45:48.600160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.918 [2024-11-15 11:45:48.600180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.918 [2024-11-15 11:45:48.600187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.606873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.606892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.606899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.613709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.613730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.613738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.620467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.620488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.620495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.627073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.627092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.627100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.633880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.633900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.633910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.640455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.640481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.640489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.647083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.647103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.647111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.653746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.653765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.653773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.660270] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.660290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.660297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.666978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.666998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.667005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.673806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.673826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.673834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.680614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.680634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.680641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.687215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.687235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.687243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.693907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.693931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.693938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.701395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.701416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.701423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:47.919 [2024-11-15 11:45:48.710020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.710041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.710049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.718258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.718279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.718287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.726737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.726758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.726766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.735617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.735638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.919 [2024-11-15 11:45:48.743380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.743402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.743410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:47.919 4453.00 IOPS, 556.62 MiB/s [2024-11-15T10:45:48.772Z] [2024-11-15 11:45:48.751852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1613790) 00:27:47.919 [2024-11-15 11:45:48.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.919 [2024-11-15 11:45:48.751881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:47.919 00:27:47.919 Latency(us) 00:27:47.919 [2024-11-15T10:45:48.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.919 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:47.919 nvme0n1 : 2.01 4455.69 556.96 0.00 0.00 3587.24 852.71 10247.45 00:27:47.919 [2024-11-15T10:45:48.772Z] 
=================================================================================================================== 00:27:47.919 [2024-11-15T10:45:48.772Z] Total : 4455.69 556.96 0.00 0.00 3587.24 852.71 10247.45 00:27:47.919 { 00:27:47.919 "results": [ 00:27:47.919 { 00:27:47.919 "job": "nvme0n1", 00:27:47.919 "core_mask": "0x2", 00:27:47.919 "workload": "randread", 00:27:47.919 "status": "finished", 00:27:47.919 "queue_depth": 16, 00:27:47.919 "io_size": 131072, 00:27:47.919 "runtime": 2.005751, 00:27:47.919 "iops": 4455.687670104614, 00:27:47.919 "mibps": 556.9609587630767, 00:27:47.919 "io_failed": 0, 00:27:47.919 "io_timeout": 0, 00:27:47.919 "avg_latency_us": 3587.244067665578, 00:27:47.919 "min_latency_us": 852.7127272727273, 00:27:47.919 "max_latency_us": 10247.447272727273 00:27:47.919 } 00:27:47.919 ], 00:27:47.919 "core_count": 1 00:27:47.919 } 00:27:48.178 11:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:48.178 11:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:48.178 11:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:48.178 11:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:48.178 | .driver_specific 00:27:48.178 | .nvme_error 00:27:48.178 | .status_code 00:27:48.178 | .command_transient_transport_error' 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 289 > 0 )) 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1395217 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1395217 ']' 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1395217 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1395217 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1395217' 00:27:48.438 killing process with pid 1395217 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1395217 00:27:48.438 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.438 00:27:48.438 Latency(us) 00:27:48.438 [2024-11-15T10:45:49.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.438 [2024-11-15T10:45:49.291Z] =================================================================================================================== 00:27:48.438 [2024-11-15T10:45:49.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.438 11:45:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1395217 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1395766 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1395766 /var/tmp/bperf.sock 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1395766 ']' 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:48.438 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.697 [2024-11-15 11:45:49.327601] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:27:48.697 [2024-11-15 11:45:49.327666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1395766 ] 00:27:48.697 [2024-11-15 11:45:49.393862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.697 [2024-11-15 11:45:49.427724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.697 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.697 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:48.697 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.955 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.213 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:49.213 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.213 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.213 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.213 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.213 11:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.471 nvme0n1 00:27:49.730 11:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:49.730 11:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.730 11:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.730 11:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.730 11:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:49.730 11:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.730 Running I/O for 2 seconds... 
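(The trace above sets up the randwrite digest-error pass: bdevperf is started against /var/tmp/bperf.sock, --nvme-error-stat and unlimited bdev retries are enabled, the controller is attached over TCP with --ddgst, and CRC32C corruption is injected through the accel error framework before perform_tests runs. Below is a minimal plain-shell sketch of that sequence, assuming the same SPDK checkout path and RPC sockets used in this job; it simply expands the digest.sh wrappers (bperf_rpc, rpc_cmd, get_transient_errcount) seen in the trace into direct rpc.py calls, and the backgrounding/cleanup details are omitted.)

  # sketch only; SPDK_DIR/SOCK mirror the paths visible in this job's trace
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # start bdevperf with no bdevs (-z) so the controller can be attached over RPC
  $SPDK_DIR/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-controller NVMe error counters and retry failed I/O indefinitely
  $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the NVMe/TCP controller with data digest enabled
  $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt every 256th crc32c computation on the target side (rpc_cmd in the trace,
  # i.e. the nvmf target's RPC socket rather than bperf.sock), then run the workload
  $SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

  # count the transient transport errors (digest failures) recorded for the bdev,
  # as get_transient_errcount does with the jq filter shown earlier in the log
  $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 | \
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'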
00:27:49.730 [2024-11-15 11:45:50.486826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efd208 00:27:49.730 [2024-11-15 11:45:50.487876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.487905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.730 [2024-11-15 11:45:50.500784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efda78 00:27:49.730 [2024-11-15 11:45:50.501789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.501812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.730 [2024-11-15 11:45:50.514741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efe2e8 00:27:49.730 [2024-11-15 11:45:50.515745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.515764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:49.730 [2024-11-15 11:45:50.528767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eff3c8 00:27:49.730 [2024-11-15 11:45:50.529707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.529726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:49.730 [2024-11-15 11:45:50.542787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efeb58 00:27:49.730 [2024-11-15 11:45:50.543722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.543741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:49.730 [2024-11-15 11:45:50.562260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efeb58 00:27:49.730 [2024-11-15 11:45:50.564664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.564682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:49.730 [2024-11-15 11:45:50.576225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eff3c8 00:27:49.730 [2024-11-15 11:45:50.578589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.730 [2024-11-15 11:45:50.578607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:27:49.989 [2024-11-15 11:45:50.590205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efe2e8 00:27:49.989 [2024-11-15 11:45:50.592552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.989 [2024-11-15 11:45:50.592570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:49.989 [2024-11-15 11:45:50.604144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efda78 00:27:49.989 [2024-11-15 11:45:50.606452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.989 [2024-11-15 11:45:50.606473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:49.989 [2024-11-15 11:45:50.618079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efd208 00:27:49.989 [2024-11-15 11:45:50.620376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.989 [2024-11-15 11:45:50.620394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:49.989 [2024-11-15 11:45:50.632055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efc998 00:27:49.989 [2024-11-15 11:45:50.634338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.989 [2024-11-15 11:45:50.634357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:49.989 [2024-11-15 11:45:50.646034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efc128 00:27:49.990 [2024-11-15 11:45:50.648284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.648302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.659972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efb8b8 00:27:49.990 [2024-11-15 11:45:50.662204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.662222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.673920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efb048 00:27:49.990 [2024-11-15 11:45:50.676120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.676137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.687847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efa7d8 00:27:49.990 [2024-11-15 11:45:50.690027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.690045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.701783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef9f68 00:27:49.990 [2024-11-15 11:45:50.703942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.703960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.715849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef96f8 00:27:49.990 [2024-11-15 11:45:50.717985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.718003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.729801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef8e88 00:27:49.990 [2024-11-15 11:45:50.731914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.731935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.743737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef8618 00:27:49.990 [2024-11-15 11:45:50.745823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.745841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.757662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef7da8 00:27:49.990 [2024-11-15 11:45:50.759726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.759743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.771566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef7538 00:27:49.990 [2024-11-15 11:45:50.773612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.773629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.785511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef6cc8 00:27:49.990 [2024-11-15 11:45:50.787500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.787518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.799412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef6458 00:27:49.990 [2024-11-15 11:45:50.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.801364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.813358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef5be8 00:27:49.990 [2024-11-15 11:45:50.815330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.815348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:49.990 [2024-11-15 11:45:50.827294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef5378 00:27:49.990 [2024-11-15 11:45:50.829241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.990 [2024-11-15 11:45:50.829259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.841213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef4b08 00:27:50.250 [2024-11-15 11:45:50.843163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.843181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.855136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef4298 00:27:50.250 [2024-11-15 11:45:50.857049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.857067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.869069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef3a28 00:27:50.250 [2024-11-15 11:45:50.870947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.870965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.882993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef31b8 00:27:50.250 [2024-11-15 11:45:50.884849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.884867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.896935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef2948 00:27:50.250 [2024-11-15 11:45:50.898740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.898758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.910857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef20d8 00:27:50.250 [2024-11-15 11:45:50.912642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.912659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.924775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef1868 00:27:50.250 [2024-11-15 11:45:50.926550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.926567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.938724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef0ff8 00:27:50.250 [2024-11-15 11:45:50.940490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.940508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.952664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef0788 00:27:50.250 [2024-11-15 11:45:50.954402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.966605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeff18 00:27:50.250 [2024-11-15 11:45:50.968319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.968338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.980533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eef6a8 00:27:50.250 [2024-11-15 11:45:50.982199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.982216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:50.994445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeee38 00:27:50.250 [2024-11-15 11:45:50.996113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:50.996130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.008392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eee5c8 00:27:50.250 [2024-11-15 11:45:51.010041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:51.010058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.022304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eedd58 00:27:50.250 [2024-11-15 11:45:51.023925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:51.023942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.036228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eed4e8 00:27:50.250 [2024-11-15 11:45:51.037828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:51.037848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.050408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eecc78 00:27:50.250 [2024-11-15 11:45:51.051992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:51.052009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.064327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eec408 00:27:50.250 [2024-11-15 11:45:51.065878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 
11:45:51.065896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.078251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eebb98 00:27:50.250 [2024-11-15 11:45:51.079786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:51.079804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:50.250 [2024-11-15 11:45:51.092204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeb328 00:27:50.250 [2024-11-15 11:45:51.093709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.250 [2024-11-15 11:45:51.093729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:50.510 [2024-11-15 11:45:51.106143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeaab8 00:27:50.510 [2024-11-15 11:45:51.107631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.510 [2024-11-15 11:45:51.107649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:50.510 [2024-11-15 11:45:51.120074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eea248 00:27:50.510 [2024-11-15 11:45:51.121527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.510 [2024-11-15 11:45:51.121545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:50.510 [2024-11-15 11:45:51.134004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee99d8 00:27:50.510 [2024-11-15 11:45:51.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.510 [2024-11-15 11:45:51.135369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:50.510 [2024-11-15 11:45:51.147941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee9168 00:27:50.510 [2024-11-15 11:45:51.149379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.510 [2024-11-15 11:45:51.149397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:50.510 [2024-11-15 11:45:51.161932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee88f8 00:27:50.510 [2024-11-15 11:45:51.163314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.510 [2024-11-15 11:45:51.163332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.510 [2024-11-15 11:45:51.175844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee8088 00:27:50.510 [2024-11-15 11:45:51.177184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.177201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.189770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee7818 00:27:50.511 [2024-11-15 11:45:51.191143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.191160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.203735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee6fa8 00:27:50.511 [2024-11-15 11:45:51.204981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.204998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.217655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee6738 00:27:50.511 [2024-11-15 11:45:51.218954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.218972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.231617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee5ec8 00:27:50.511 [2024-11-15 11:45:51.232887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.232904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.245566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee5658 00:27:50.511 [2024-11-15 11:45:51.246805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.246823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.259520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee4de8 00:27:50.511 [2024-11-15 11:45:51.260718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22872 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.260735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.273492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee4578 00:27:50.511 [2024-11-15 11:45:51.274696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.274714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.287403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee3d08 00:27:50.511 [2024-11-15 11:45:51.288492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.288510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.301314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee3498 00:27:50.511 [2024-11-15 11:45:51.302376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.302393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.318094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef3e60 00:27:50.511 [2024-11-15 11:45:51.319971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.319989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.332019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef35f0 00:27:50.511 [2024-11-15 11:45:51.333810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.333828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.345944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef2d80 00:27:50.511 [2024-11-15 11:45:51.347695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.347713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:50.511 [2024-11-15 11:45:51.357243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efb8b8 00:27:50.511 [2024-11-15 11:45:51.358336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 
nsid:1 lba:14973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.511 [2024-11-15 11:45:51.358354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.371191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efc128 00:27:50.771 [2024-11-15 11:45:51.372265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.372283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.385130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efc998 00:27:50.771 [2024-11-15 11:45:51.386086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.386104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.399065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efd208 00:27:50.771 [2024-11-15 11:45:51.399998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.400015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.413006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efda78 00:27:50.771 [2024-11-15 11:45:51.413983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.414001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.426944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efe2e8 00:27:50.771 [2024-11-15 11:45:51.427904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.427922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.440871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eff3c8 00:27:50.771 [2024-11-15 11:45:51.441801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.441819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.454797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efeb58 00:27:50.771 [2024-11-15 11:45:51.455703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.455724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:50.771 17968.00 IOPS, 70.19 MiB/s [2024-11-15T10:45:51.624Z] [2024-11-15 11:45:51.475628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efeb58 00:27:50.771 [2024-11-15 11:45:51.478029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.478048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.489596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eff3c8 00:27:50.771 [2024-11-15 11:45:51.491944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.491963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.503574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efe2e8 00:27:50.771 [2024-11-15 11:45:51.505901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.505919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.517499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efda78 00:27:50.771 [2024-11-15 11:45:51.519755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.519773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.531433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efd208 00:27:50.771 [2024-11-15 11:45:51.533643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.533661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.545385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efc998 00:27:50.771 [2024-11-15 11:45:51.547570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.547587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.559302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efc128 
00:27:50.771 [2024-11-15 11:45:51.561546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.561564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.573232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efb8b8 00:27:50.771 [2024-11-15 11:45:51.575467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.575484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.587187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efb048 00:27:50.771 [2024-11-15 11:45:51.589404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.589422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.601120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016efa7d8 00:27:50.771 [2024-11-15 11:45:51.603207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.603225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:50.771 [2024-11-15 11:45:51.615065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef9f68 00:27:50.771 [2024-11-15 11:45:51.617219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.771 [2024-11-15 11:45:51.617237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.629033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef96f8 00:27:51.031 [2024-11-15 11:45:51.631155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.631174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.642935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef8e88 00:27:51.031 [2024-11-15 11:45:51.645055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.645073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.656873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with 
pdu=0x200016ef8618 00:27:51.031 [2024-11-15 11:45:51.658892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.658909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.670766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef7da8 00:27:51.031 [2024-11-15 11:45:51.672833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.672850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.684664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef7538 00:27:51.031 [2024-11-15 11:45:51.686695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.686713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.698613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef6cc8 00:27:51.031 [2024-11-15 11:45:51.700610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.700627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.712524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef6458 00:27:51.031 [2024-11-15 11:45:51.714495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.714512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.726563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef5be8 00:27:51.031 [2024-11-15 11:45:51.728511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.728529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.740467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef5378 00:27:51.031 [2024-11-15 11:45:51.742385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.742402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.754353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b3cce0) with pdu=0x200016ef4b08 00:27:51.031 [2024-11-15 11:45:51.756282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.756300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.768261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef4298 00:27:51.031 [2024-11-15 11:45:51.770165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.770182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:51.031 [2024-11-15 11:45:51.782160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef3a28 00:27:51.031 [2024-11-15 11:45:51.784013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.031 [2024-11-15 11:45:51.784030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.796058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef31b8 00:27:51.032 [2024-11-15 11:45:51.797892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.797909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.809987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef2948 00:27:51.032 [2024-11-15 11:45:51.811823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.811841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.823876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef20d8 00:27:51.032 [2024-11-15 11:45:51.825663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.825683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.837779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef1868 00:27:51.032 [2024-11-15 11:45:51.839539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.839557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.851713] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef0ff8 00:27:51.032 [2024-11-15 11:45:51.853445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.853466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.865611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ef0788 00:27:51.032 [2024-11-15 11:45:51.867325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.867342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:51.032 [2024-11-15 11:45:51.879526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeff18 00:27:51.032 [2024-11-15 11:45:51.881220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.032 [2024-11-15 11:45:51.881237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.893467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eef6a8 00:27:51.292 [2024-11-15 11:45:51.895132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.895150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.907366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeee38 00:27:51.292 [2024-11-15 11:45:51.909017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.909035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.921284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eee5c8 00:27:51.292 [2024-11-15 11:45:51.922925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.922943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.935204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eedd58 00:27:51.292 [2024-11-15 11:45:51.936800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.936817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 
11:45:51.949099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eed4e8 00:27:51.292 [2024-11-15 11:45:51.950680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.950697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.963013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eecc78 00:27:51.292 [2024-11-15 11:45:51.964563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.964581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.976926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eec408 00:27:51.292 [2024-11-15 11:45:51.978474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.978492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:51.990843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eebb98 00:27:51.292 [2024-11-15 11:45:51.992367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:51.992384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.004768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeb328 00:27:51.292 [2024-11-15 11:45:52.006269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.006287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.018706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eeaab8 00:27:51.292 [2024-11-15 11:45:52.020219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.020237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.032626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016eea248 00:27:51.292 [2024-11-15 11:45:52.034081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.034098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:27:51.292 [2024-11-15 11:45:52.046775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee99d8 00:27:51.292 [2024-11-15 11:45:52.048186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.048204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.060664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee9168 00:27:51.292 [2024-11-15 11:45:52.062076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.062093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.074599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee88f8 00:27:51.292 [2024-11-15 11:45:52.075982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.076000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.088508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee8088 00:27:51.292 [2024-11-15 11:45:52.089878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.089896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.102435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee7818 00:27:51.292 [2024-11-15 11:45:52.103756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.103773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.116354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee6fa8 00:27:51.292 [2024-11-15 11:45:52.117657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.117674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:51.292 [2024-11-15 11:45:52.130248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee6738 00:27:51.292 [2024-11-15 11:45:52.131517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.292 [2024-11-15 11:45:52.131535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.144148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee5ec8 00:27:51.552 [2024-11-15 11:45:52.145404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.145422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.158096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee5658 00:27:51.552 [2024-11-15 11:45:52.159339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.159356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.172039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee4de8 00:27:51.552 [2024-11-15 11:45:52.173263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.173281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.186197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee4578 00:27:51.552 [2024-11-15 11:45:52.187403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.187424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.200107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee3d08 00:27:51.552 [2024-11-15 11:45:52.201218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.201235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.214026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee3498 00:27:51.552 [2024-11-15 11:45:52.215176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.215193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.227992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee2c28 00:27:51.552 [2024-11-15 11:45:52.229122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.229140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.241918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee23b8 00:27:51.552 [2024-11-15 11:45:52.243025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.243043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.255855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee1b48 00:27:51.552 [2024-11-15 11:45:52.256946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.256963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.269804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee12d8 00:27:51.552 [2024-11-15 11:45:52.270874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.270892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.283722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee0a68 00:27:51.552 [2024-11-15 11:45:52.284751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.284768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.297634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee01f8 00:27:51.552 [2024-11-15 11:45:52.298630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.298648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.311580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016edf988 00:27:51.552 [2024-11-15 11:45:52.312587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.312604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.325506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016edf118 00:27:51.552 [2024-11-15 11:45:52.326492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.326510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.339438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ede8a8 00:27:51.552 [2024-11-15 11:45:52.340405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.340423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.358797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ede038 00:27:51.552 [2024-11-15 11:45:52.361198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.361215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.372705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ede8a8 00:27:51.552 [2024-11-15 11:45:52.375087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.375104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.386640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016edf118 00:27:51.552 [2024-11-15 11:45:52.388999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.552 [2024-11-15 11:45:52.389016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:51.552 [2024-11-15 11:45:52.400572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016edf988 00:27:51.553 [2024-11-15 11:45:52.402904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.553 [2024-11-15 11:45:52.402921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:51.811 [2024-11-15 11:45:52.414508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee01f8 00:27:51.811 [2024-11-15 11:45:52.416821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.811 [2024-11-15 11:45:52.416839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:51.811 [2024-11-15 11:45:52.428424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee0a68 00:27:51.811 [2024-11-15 11:45:52.430716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.811 [2024-11-15 11:45:52.430733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:51.811 [2024-11-15 11:45:52.442373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee12d8
00:27:51.811 [2024-11-15 11:45:52.444638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:51.811 [2024-11-15 11:45:52.444655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:51.811 [2024-11-15 11:45:52.456305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee1b48
00:27:51.812 [2024-11-15 11:45:52.458520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:51.812 [2024-11-15 11:45:52.458536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:51.812 [2024-11-15 11:45:52.470234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3cce0) with pdu=0x200016ee23b8
00:27:51.812 [2024-11-15 11:45:52.472968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:51.812 [2024-11-15 11:45:52.472986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:51.812 18092.50 IOPS, 70.67 MiB/s
00:27:51.812 Latency(us)
00:27:51.812 [2024-11-15T10:45:52.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:51.812 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:51.812 nvme0n1 : 2.01 18135.02 70.84 0.00 0.00 7049.12 4110.89 19541.64
00:27:51.812 [2024-11-15T10:45:52.665Z] ===================================================================================================================
00:27:51.812 [2024-11-15T10:45:52.665Z] Total : 18135.02 70.84 0.00 0.00 7049.12 4110.89 19541.64
00:27:51.812 {
00:27:51.812 "results": [
00:27:51.812 {
00:27:51.812 "job": "nvme0n1",
00:27:51.812 "core_mask": "0x2",
00:27:51.812 "workload": "randwrite",
00:27:51.812 "status": "finished",
00:27:51.812 "queue_depth": 128,
00:27:51.812 "io_size": 4096,
00:27:51.812 "runtime": 2.005843,
00:27:51.812 "iops": 18135.018543325674,
00:27:51.812 "mibps": 70.83991618486591,
00:27:51.812 "io_failed": 0,
00:27:51.812 "io_timeout": 0,
00:27:51.812 "avg_latency_us": 7049.117565627686,
00:27:51.812 "min_latency_us": 4110.894545454546,
00:27:51.812 "max_latency_us": 19541.643636363635
00:27:51.812 }
00:27:51.812 ],
00:27:51.812 "core_count": 1
00:27:51.812 }
00:27:51.812 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:51.812 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:51.812 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:51.812 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:51.812 | .driver_specific
00:27:51.812 | .nvme_error
00:27:51.812 | .status_code
00:27:51.812 | .command_transient_transport_error'
00:27:52.070 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 ))
00:27:52.070 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1395766
00:27:52.070 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1395766 ']'
00:27:52.070 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1395766
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1395766
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1395766'
00:27:52.071 killing process with pid 1395766
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1395766
00:27:52.071 Received shutdown signal, test time was about 2.000000 seconds
00:27:52.071
00:27:52.071 Latency(us)
00:27:52.071 [2024-11-15T10:45:52.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.071 [2024-11-15T10:45:52.924Z] ===================================================================================================================
00:27:52.071 [2024-11-15T10:45:52.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:52.071 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1395766
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1396470
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1396470 /var/tmp/bperf.sock
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1396470 ']'
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
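Editor's note: the transient-error check traced above reduces to a single RPC plus a jq filter. A minimal standalone sketch of that check follows (assumptions: a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock and exposes a bdev named nvme0n1, as in this run; the function name check_transient_errors is illustrative and not part of digest.sh):

    #!/usr/bin/env bash
    # Sketch of the check traced above: read per-bdev NVMe error counters over the
    # bdevperf RPC socket and require at least one transient transport error.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    check_transient_errors() {
        local bdev=$1 errcount
        # bdev_get_iostat returns per-bdev stats, including NVMe error status counters
        errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
        # The injected data-digest corruption must show up as COMMAND TRANSIENT
        # TRANSPORT ERROR (00/22) completions, so a zero counter fails the test.
        (( errcount > 0 ))
    }

    check_transient_errors nvme0n1

In the run above the counter read back was 142, and the "(( 142 > 0 ))" trace line is that value being asserted non-zero before the bdevperf process is killed.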
00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:52.329 11:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.329 [2024-11-15 11:45:53.041761] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:27:52.329 [2024-11-15 11:45:53.041822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396470 ] 00:27:52.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:52.330 Zero copy mechanism will not be used. 00:27:52.330 [2024-11-15 11:45:53.108124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.330 [2024-11-15 11:45:53.148626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.587 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:52.587 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:52.587 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:52.587 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:52.845 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:52.845 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.845 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.845 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.845 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.845 11:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.414 nvme0n1 00:27:53.414 11:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:53.414 11:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.414 11:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:53.414 11:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.414 11:45:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:53.414 11:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.414 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:53.414 Zero copy mechanism will not be used. 00:27:53.414 Running I/O for 2 seconds... 00:27:53.414 [2024-11-15 11:45:54.146326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.146414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.146439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.153061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.153203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.153223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.160123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.160254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.160273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.167847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.167945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.167964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.174838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.174916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.174938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.182752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.182838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.182858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:53.414 [2024-11-15 11:45:54.189468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.189573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.189591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.195626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.195719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.195737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.202348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.202485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.202503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.209284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.209397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.209414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.216216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.216342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.216359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.223825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.223963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.223981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.229989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.230114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.230132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.236335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.236470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.236504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.242593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.242668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.242685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.248916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.249062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.249080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.256292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.256415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.256433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.414 [2024-11-15 11:45:54.263186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.414 [2024-11-15 11:45:54.263315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.414 [2024-11-15 11:45:54.263334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.269540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.269638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.269655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.275674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.275779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.275796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.282481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.282606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.282624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.289298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.289423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.289441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.296122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.296268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.296286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.303005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.303157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.303175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.309951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.310068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.310086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.317387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.317515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.317533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.324784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.324895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.324912] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.331641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.331762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.331779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.338251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.338382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.338398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.345842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.345923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.345941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.353666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.353775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.353796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.361078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.361154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.361171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.369028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.369130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.369147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.375740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.375867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.375884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.381831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.381953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.381971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.388406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.388556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.388574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.395690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.395770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.395788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.402010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.402124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.408437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.408610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.408628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.414608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.414770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.414788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.421082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.421203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 
11:45:54.421220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.427943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.428091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.428109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.675 [2024-11-15 11:45:54.434156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.675 [2024-11-15 11:45:54.434271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.675 [2024-11-15 11:45:54.434289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.440111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.440185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.440203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.446318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.446431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.446449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.453020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.453263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.459828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.459951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.459969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.466147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.466265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:53.676 [2024-11-15 11:45:54.466283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.472419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.472587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.472604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.479079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.479216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.479234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.486690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.486792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.486810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.493869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.494132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.494151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.503162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.503238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.503255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.510804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.510939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.510956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.676 [2024-11-15 11:45:54.518995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.676 [2024-11-15 11:45:54.519072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.676 [2024-11-15 11:45:54.519089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.526021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.526157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.526175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.532943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.533052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.533076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.539985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.540114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.540131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.548650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.548764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.548782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.556512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.556633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.556650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.564228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.564314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.564332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.571143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.571232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.571249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.577961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.578039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.578057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.940 [2024-11-15 11:45:54.584981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.940 [2024-11-15 11:45:54.585070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.940 [2024-11-15 11:45:54.585088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.592375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.592487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.592505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.598861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.598951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.598969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.604913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.605002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.605021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.610910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.610996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.611013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.616832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.616914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.616932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.622925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.623012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.623030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.629008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.629089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.629106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.635015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.635138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.635157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.640979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.641065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.641083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.647088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.647223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.647240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.653283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.653372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.653390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.659408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.659512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.659530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.665374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.665481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.665499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.671411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.671531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.671548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.678141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.678283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.678301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.685498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.685731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.685750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.692623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.692735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.692752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.699638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.699862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.699879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.706777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 
11:45:54.706918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.706941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.714342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.714478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.714496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.721616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.721753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.721769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.729388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.729656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.729675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.737485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.737585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.737603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.746299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.746413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.746431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.755109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.755347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.755367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.764179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 
00:27:53.941 [2024-11-15 11:45:54.764336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.772564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.941 [2024-11-15 11:45:54.772675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.941 [2024-11-15 11:45:54.772692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.941 [2024-11-15 11:45:54.780141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:53.942 [2024-11-15 11:45:54.780215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.942 [2024-11-15 11:45:54.780233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.201 [2024-11-15 11:45:54.787876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.201 [2024-11-15 11:45:54.787947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.201 [2024-11-15 11:45:54.787965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.201 [2024-11-15 11:45:54.795583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.795666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.803435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.803542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.803560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.811981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.812070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.819947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.820020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.820038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.827103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.827174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.827191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.833970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.834048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.834066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.840557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.840628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.840646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.847286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.847357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.847374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.854482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.854555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.854572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.861177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.861251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.861268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.867762] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.867835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.867852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.874540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.874613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.874630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.881375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.881479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.881497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.888046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.888113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.888130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.894499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.894571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.894588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.900505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.900598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.900618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.906315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.906390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.906407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.912167] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.912256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.912274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.918014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.918081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.918098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.924020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.924092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.924109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.929882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.929969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.929986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.935859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.935930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.935948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.942661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.942760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.942778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.949039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.949133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.949152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.202 
[2024-11-15 11:45:54.954936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.955032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.955049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.960832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.960901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.960918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.966722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.202 [2024-11-15 11:45:54.966794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.202 [2024-11-15 11:45:54.966811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.202 [2024-11-15 11:45:54.972757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:54.972850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:54.972868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:54.979207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:54.979291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:54.979308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:54.985095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:54.985165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:54.985183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:54.990953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:54.991046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:54.991063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:54.996911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:54.996987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:54.997005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.002786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.002875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.002893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.008569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.008661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.008679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.014437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.014538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.014555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.021032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.021119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.021136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.027594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.027682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.027700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.034473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.034556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.034573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.040325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.040425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.040443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.203 [2024-11-15 11:45:55.046431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.203 [2024-11-15 11:45:55.046530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.203 [2024-11-15 11:45:55.046548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.052281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.052407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.052425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.058671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.058797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.058818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.065597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.065717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.065733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.072161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.072377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.072400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.078050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.078235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.078252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.083953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.084177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.084197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.091113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.091327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.091344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.096793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.097011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.097030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.102422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.102653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.102679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.108007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.108231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.463 [2024-11-15 11:45:55.108249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.463 [2024-11-15 11:45:55.113621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.463 [2024-11-15 11:45:55.113848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.113866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.119599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.119807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.119825] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.125869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.126093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.126111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.132088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.132309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.132335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.138649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.138797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.138815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 4565.00 IOPS, 570.62 MiB/s [2024-11-15T10:45:55.317Z] [2024-11-15 11:45:55.145091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.145293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.145310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.150687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.150890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.150908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.156149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.156355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.156374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.161725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.161934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:54.464 [2024-11-15 11:45:55.161953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.167347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.167559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.167579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.172960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.173163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.173181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.178533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.178737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.178755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.184086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.184298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.184316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.189779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.189993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.190011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.195392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.195605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.195623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.200993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.201200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.201217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.206490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.206693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.206711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.212048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.212264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.212284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.217585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.217782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.217800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.223180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.223381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.223399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.228728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.228929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.228947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.234227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.234428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.234446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.239798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.240003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.240020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.245369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.245579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.245597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.250955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.251163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.251181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.256511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.256720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.256738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.262048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.262250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.464 [2024-11-15 11:45:55.262268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.464 [2024-11-15 11:45:55.267622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.464 [2024-11-15 11:45:55.267828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.267845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.273110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.273314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.273331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.278714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.278917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.278934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.284203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.284407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.284425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.289741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.289945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.289962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.295271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.295480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.295498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.300848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.301067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.301085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.306433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.306644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.306662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.465 [2024-11-15 11:45:55.312000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.465 [2024-11-15 11:45:55.312212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.465 [2024-11-15 11:45:55.312230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.317577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.317787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.317804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.323137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.323340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.323358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.328679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.328884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.328901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.334149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.334355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.334372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.339702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.339907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.339924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.345357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.345568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.345586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.350859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.351068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.351086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.356420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 
11:45:55.356633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.356653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.362002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.362207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.362225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.367632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.367838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.367856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.373223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.373426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.373444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.378842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.725 [2024-11-15 11:45:55.379043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.725 [2024-11-15 11:45:55.379060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.725 [2024-11-15 11:45:55.384389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.384600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.384617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.389976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.390176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.390193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.395597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with 
pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.395795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.395812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.401181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.401386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.401403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.406725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.406934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.406951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.412276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.412509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.412527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.417833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.418048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.418066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.423425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.423638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.423655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.428945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.429153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.429170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.434555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.434755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.434772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.440398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.440608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.440625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.447008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.447302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.447321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.453876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.454066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.454083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.460879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.461079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.461097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.468002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.468206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.468224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.475122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.475316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.475333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.481958] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.482238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.482256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.490310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.490509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.490527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.496614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.496814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.496832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.502235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.502434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.502452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.508081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.508285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.508302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.514501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.514750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.514772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.521065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.521330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.521349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.527683] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.527887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.527905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.534292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.534531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.534550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.541221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.726 [2024-11-15 11:45:55.541420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.726 [2024-11-15 11:45:55.541438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.726 [2024-11-15 11:45:55.546931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.727 [2024-11-15 11:45:55.547130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.727 [2024-11-15 11:45:55.547147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.727 [2024-11-15 11:45:55.552570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.727 [2024-11-15 11:45:55.552776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.727 [2024-11-15 11:45:55.552794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.727 [2024-11-15 11:45:55.558332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.727 [2024-11-15 11:45:55.558546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.727 [2024-11-15 11:45:55.558563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.727 [2024-11-15 11:45:55.564045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.727 [2024-11-15 11:45:55.564241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.727 [2024-11-15 11:45:55.564258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.727 
[2024-11-15 11:45:55.570788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.727 [2024-11-15 11:45:55.571021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.727 [2024-11-15 11:45:55.571039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.577196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.577408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.577426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.584219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.584474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.584493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.590961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.591163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.591181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.596771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.596960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.596977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.602869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.603075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.603092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.608470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.608669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.608687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.614082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.614292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.614309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.619682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.619886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.619903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.625249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.625453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.625476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.630858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.631071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.631089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.636439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.636683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.636703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.642057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.642267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.642284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.647590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.647795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.647813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.653186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.653386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.653404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.659112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.659318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.659335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.665156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.665363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.665381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.670747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.670972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.987 [2024-11-15 11:45:55.670993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.987 [2024-11-15 11:45:55.676330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.987 [2024-11-15 11:45:55.676549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.676567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.681891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.682106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.682124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.687457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.687668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.687686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.693110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.693315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.693333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.699175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.699382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.699399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.705484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.705708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.705726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.713266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.713497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.713517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.720021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.720213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.720230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.726010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.726210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.726227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.732131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.732327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.732345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.738245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.738454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.738477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.744032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.744248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.744266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.750163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.750371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.750388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.757842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.758088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.758107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.764234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.764441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.764464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.770075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.770300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.770319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.775804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.776013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 
11:45:55.776031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.781602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.781796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.781814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.787959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.788159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.788176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.794086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.794291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.794308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.799638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.799854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.799871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.805203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.805410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.805427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.810787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.810992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.811010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.816386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.816602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:54.988 [2024-11-15 11:45:55.816619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.821963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.822167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.822184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.988 [2024-11-15 11:45:55.827557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.988 [2024-11-15 11:45:55.827756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.988 [2024-11-15 11:45:55.827778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.989 [2024-11-15 11:45:55.833165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:54.989 [2024-11-15 11:45:55.833380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.989 [2024-11-15 11:45:55.833398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.248 [2024-11-15 11:45:55.838791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.248 [2024-11-15 11:45:55.839008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.248 [2024-11-15 11:45:55.839026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.248 [2024-11-15 11:45:55.844316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.248 [2024-11-15 11:45:55.844530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.248 [2024-11-15 11:45:55.844548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.248 [2024-11-15 11:45:55.850124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.248 [2024-11-15 11:45:55.850331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.248 [2024-11-15 11:45:55.850348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.248 [2024-11-15 11:45:55.856257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.248 [2024-11-15 11:45:55.856467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.248 [2024-11-15 11:45:55.856485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.248 [2024-11-15 11:45:55.862438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.862643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.862660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.868606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.868816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.868833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.875038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.875237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.875254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.881134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.881335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.881353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.886910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.887111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.887129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.893027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.893233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.893250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.899804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.900009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.900026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.905959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.906166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.906183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.912535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.912742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.912759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.918605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.918803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.918821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.924710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.924933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.924951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.930561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.930762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.930780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.936575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.936786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.936803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.942772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.942976] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.942993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.949014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.949227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.949245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.955277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.955487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.955505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.961157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.961353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.961370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.966886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.967091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.967108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.972855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.973061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.973079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.979040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.979246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.979264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.984969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.985192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.985213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.991132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.991336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.991353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:55.997003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:55.997209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:55.997226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:56.003270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:56.003480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:56.003498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:56.009587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:56.009785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:56.009803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.249 [2024-11-15 11:45:56.016073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.249 [2024-11-15 11:45:56.016284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.249 [2024-11-15 11:45:56.016303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.022839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.023043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.023060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.028928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 
11:45:56.029117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.029135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.034390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.034601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.034619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.039888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.040094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.040112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.045431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.045646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.045664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.051256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.051468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.051487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.056719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.056933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.056951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.062194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.062403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.062420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.067659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with 
pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.067864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.067882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.073167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.073374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.073391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.078711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.078911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.078928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.084768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.084973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.084991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.091138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.091345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.091362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.250 [2024-11-15 11:45:56.096728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.250 [2024-11-15 11:45:56.096931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.250 [2024-11-15 11:45:56.096948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.509 [2024-11-15 11:45:56.102378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.509 [2024-11-15 11:45:56.102589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.509 [2024-11-15 11:45:56.102607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.509 [2024-11-15 11:45:56.107986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.509 [2024-11-15 11:45:56.108195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.509 [2024-11-15 11:45:56.108213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.509 [2024-11-15 11:45:56.113706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.509 [2024-11-15 11:45:56.113910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.509 [2024-11-15 11:45:56.113927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.509 [2024-11-15 11:45:56.119318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.509 [2024-11-15 11:45:56.119531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.509 [2024-11-15 11:45:56.119549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.509 [2024-11-15 11:45:56.124856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.509 [2024-11-15 11:45:56.125072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.509 [2024-11-15 11:45:56.125092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.509 [2024-11-15 11:45:56.130386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.510 [2024-11-15 11:45:56.130597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.510 [2024-11-15 11:45:56.130614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.510 [2024-11-15 11:45:56.136036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.510 [2024-11-15 11:45:56.136245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.510 [2024-11-15 11:45:56.136266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.510 [2024-11-15 11:45:56.142087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b3d020) with pdu=0x200016eff3c8 00:27:55.510 [2024-11-15 11:45:56.142289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.510 [2024-11-15 11:45:56.142306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.510 4913.50 IOPS, 614.19 MiB/s 00:27:55.510 Latency(us) 
00:27:55.510 [2024-11-15T10:45:56.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.510 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:55.510 nvme0n1 : 2.00 4913.03 614.13 0.00 0.00 3251.74 2398.02 12273.11 00:27:55.510 [2024-11-15T10:45:56.363Z] =================================================================================================================== 00:27:55.510 [2024-11-15T10:45:56.363Z] Total : 4913.03 614.13 0.00 0.00 3251.74 2398.02 12273.11 00:27:55.510 { 00:27:55.510 "results": [ 00:27:55.510 { 00:27:55.510 "job": "nvme0n1", 00:27:55.510 "core_mask": "0x2", 00:27:55.510 "workload": "randwrite", 00:27:55.510 "status": "finished", 00:27:55.510 "queue_depth": 16, 00:27:55.510 "io_size": 131072, 00:27:55.510 "runtime": 2.003447, 00:27:55.510 "iops": 4913.032388678113, 00:27:55.510 "mibps": 614.1290485847642, 00:27:55.510 "io_failed": 0, 00:27:55.510 "io_timeout": 0, 00:27:55.510 "avg_latency_us": 3251.7427229318478, 00:27:55.510 "min_latency_us": 2398.021818181818, 00:27:55.510 "max_latency_us": 12273.105454545455 00:27:55.510 } 00:27:55.510 ], 00:27:55.510 "core_count": 1 00:27:55.510 } 00:27:55.510 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:55.510 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:55.510 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:55.510 | .driver_specific 00:27:55.510 | .nvme_error 00:27:55.510 | .status_code 00:27:55.510 | .command_transient_transport_error' 00:27:55.510 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 318 > 0 )) 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1396470 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1396470 ']' 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1396470 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1396470 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1396470' 00:27:55.769 killing process with pid 1396470 00:27:55.769 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1396470 00:27:55.769 Received shutdown signal, test time was about 2.000000 seconds 00:27:55.769 00:27:55.769 Latency(us) 00:27:55.769 [2024-11-15T10:45:56.622Z] Device Information : runtime(s) 
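Note: the trace above shows how host/digest.sh obtains the transient error count that is checked on the next line: it issues bdev_get_iostat over the bperf RPC socket and filters the JSON with the jq expression shown. A minimal standalone sketch of that same check, assuming the bperf socket (/var/tmp/bperf.sock) is still live and exposes bdev nvme0n1 as in this run:

    # Sketch only: reproduces the get_transient_errcount check traced above.
    # rpc.py path, socket, bdev name and jq filter are taken from the trace.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest_error test passes only if the injected data digest errors were
    # surfaced as transient transport errors (318 of them in this run).
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"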
IOPS MiB/s Fail/s TO/s Average min max 00:27:55.769 [2024-11-15T10:45:56.622Z] =================================================================================================================== 00:27:55.769 [2024-11-15T10:45:56.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.770 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1396470 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1394410 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1394410 ']' 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1394410 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1394410 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1394410' 00:27:56.029 killing process with pid 1394410 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1394410 00:27:56.029 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1394410 00:27:56.289 00:27:56.289 real 0m15.267s 00:27:56.289 user 0m30.918s 00:27:56.289 sys 0m4.376s 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.289 ************************************ 00:27:56.289 END TEST nvmf_digest_error 00:27:56.289 ************************************ 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.289 rmmod nvme_tcp 00:27:56.289 rmmod nvme_fabrics 00:27:56.289 rmmod nvme_keyring 00:27:56.289 11:45:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 
-- # '[' -n 1394410 ']' 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1394410 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1394410 ']' 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1394410 00:27:56.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1394410) - No such process 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1394410 is not found' 00:27:56.289 Process with pid 1394410 is not found 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.289 11:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.825 00:27:58.825 real 0m38.439s 00:27:58.825 user 1m3.475s 00:27:58.825 sys 0m12.932s 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.825 ************************************ 00:27:58.825 END TEST nvmf_digest 00:27:58.825 ************************************ 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.825 ************************************ 00:27:58.825 START TEST nvmf_bdevperf 00:27:58.825 ************************************ 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:58.825 * Looking for test storage... 
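Note: the lines above are nvmftestfini tearing the test environment down: unload nvme-tcp/nvme-fabrics (the rmmod lines), confirm the nvmf target process is gone (pid 1394410 had already exited), restore iptables rules without the SPDK_NVMF entries, and remove the SPDK network namespace; the interface address flush for cvl_0_1 follows on the next lines. A rough consolidated sketch of those steps, not the actual helper bodies in nvmf/common.sh, with the iptables-save/grep/iptables-restore combination assumed from the three commands traced above:

    # Rough sketch of the nvmftestfini teardown traced above (assumptions noted inline).
    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill -0 1394410 2>/dev/null || echo 'Process with pid 1394410 is not found'
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # assumed pipeline for the iptr step
    # namespace removal (cvl_0_0_ns_spdk) and 'ip -4 addr flush cvl_0_1' complete the cleanup below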
00:27:58.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.825 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:58.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.826 --rc genhtml_branch_coverage=1 00:27:58.826 --rc genhtml_function_coverage=1 00:27:58.826 --rc genhtml_legend=1 00:27:58.826 --rc geninfo_all_blocks=1 00:27:58.826 --rc geninfo_unexecuted_blocks=1 00:27:58.826 00:27:58.826 ' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:58.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.826 --rc genhtml_branch_coverage=1 00:27:58.826 --rc genhtml_function_coverage=1 00:27:58.826 --rc genhtml_legend=1 00:27:58.826 --rc geninfo_all_blocks=1 00:27:58.826 --rc geninfo_unexecuted_blocks=1 00:27:58.826 00:27:58.826 ' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:58.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.826 --rc genhtml_branch_coverage=1 00:27:58.826 --rc genhtml_function_coverage=1 00:27:58.826 --rc genhtml_legend=1 00:27:58.826 --rc geninfo_all_blocks=1 00:27:58.826 --rc geninfo_unexecuted_blocks=1 00:27:58.826 00:27:58.826 ' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:58.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.826 --rc genhtml_branch_coverage=1 00:27:58.826 --rc genhtml_function_coverage=1 00:27:58.826 --rc genhtml_legend=1 00:27:58.826 --rc geninfo_all_blocks=1 00:27:58.826 --rc geninfo_unexecuted_blocks=1 00:27:58.826 00:27:58.826 ' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.826 11:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:04.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:04.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
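The device-discovery trace above and below reduces to a sysfs glob per PCI function: gather the supported e810/x722/mlx device IDs, then look up the kernel net device behind each matching PCI address. A minimal standalone sketch of that lookup, assuming only the PCI address 0000:af:00.0 seen in this log (the script wrapper and argument handling are illustrative, not part of the test suite):

#!/usr/bin/env bash
# Sketch: resolve the kernel net device(s) behind a PCI function via sysfs,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the trace.
pci=${1:-0000:af:00.0}                      # PCI address taken from this log; pass another as $1
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
# With no match the glob stays literal, so check that the first entry really exists.
if [[ ! -e ${pci_net_devs[0]} ]]; then
    echo "no net devices under $pci" >&2
    exit 1
fi
pci_net_devs=( "${pci_net_devs[@]##*/}" )   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

Run against 0000:af:00.0 on this host it would print the same "Found net devices under 0000:af:00.0: cvl_0_0" line that appears in the trace below.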
00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:04.098 Found net devices under 0000:af:00.0: cvl_0_0 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:04.098 Found net devices under 0000:af:00.1: cvl_0_1 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:04.098 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:04.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:28:04.099 00:28:04.099 --- 10.0.0.2 ping statistics --- 00:28:04.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.099 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:04.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:28:04.099 00:28:04.099 --- 10.0.0.1 ping statistics --- 00:28:04.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.099 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1400574 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1400574 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1400574 ']' 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:04.099 [2024-11-15 11:46:04.398027] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:28:04.099 [2024-11-15 11:46:04.398085] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.099 [2024-11-15 11:46:04.469863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:04.099 [2024-11-15 11:46:04.510292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.099 [2024-11-15 11:46:04.510325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.099 [2024-11-15 11:46:04.510332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.099 [2024-11-15 11:46:04.510338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.099 [2024-11-15 11:46:04.510343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.099 [2024-11-15 11:46:04.511773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.099 [2024-11-15 11:46:04.511877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.099 [2024-11-15 11:46:04.511878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 [2024-11-15 11:46:04.662715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 Malloc0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
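The network plumbing traced above (nvmf_tcp_init) amounts to a small point-to-point topology: the target-side interface is moved into a network namespace, both ends get a 10.0.0.x/24 address, and the NVMe/TCP port is opened before nvmf_tgt is launched inside the namespace. A condensed sketch of those steps, using the interface names, addresses, and paths shown in this log (root required; this is an illustration of the trace, not a replacement for nvmftestinit):

#!/usr/bin/env bash
# Sketch: point-to-point NVMe/TCP test topology as set up in the trace above.
set -e
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic on the default port used by the test (4420).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions, as the trace does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
# Start the target inside the namespace on cores 1-3 (-m 0xE); the test
# backgrounds this and waits for /var/tmp/spdk.sock to come up.
ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE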
00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.099 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:04.100 [2024-11-15 11:46:04.722178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:04.100 { 00:28:04.100 "params": { 00:28:04.100 "name": "Nvme$subsystem", 00:28:04.100 "trtype": "$TEST_TRANSPORT", 00:28:04.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.100 "adrfam": "ipv4", 00:28:04.100 "trsvcid": "$NVMF_PORT", 00:28:04.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.100 "hdgst": ${hdgst:-false}, 00:28:04.100 "ddgst": ${ddgst:-false} 00:28:04.100 }, 00:28:04.100 "method": "bdev_nvme_attach_controller" 00:28:04.100 } 00:28:04.100 EOF 00:28:04.100 )") 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:04.100 11:46:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:04.100 "params": { 00:28:04.100 "name": "Nvme1", 00:28:04.100 "trtype": "tcp", 00:28:04.100 "traddr": "10.0.0.2", 00:28:04.100 "adrfam": "ipv4", 00:28:04.100 "trsvcid": "4420", 00:28:04.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.100 "hdgst": false, 00:28:04.100 "ddgst": false 00:28:04.100 }, 00:28:04.100 "method": "bdev_nvme_attach_controller" 00:28:04.100 }' 00:28:04.100 [2024-11-15 11:46:04.778245] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:28:04.100 [2024-11-15 11:46:04.778303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400605 ] 00:28:04.100 [2024-11-15 11:46:04.874844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.100 [2024-11-15 11:46:04.924891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.668 Running I/O for 1 seconds... 00:28:05.606 10546.00 IOPS, 41.20 MiB/s 00:28:05.606 Latency(us) 00:28:05.606 [2024-11-15T10:46:06.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.606 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:05.606 Verification LBA range: start 0x0 length 0x4000 00:28:05.606 Nvme1n1 : 1.01 10566.44 41.28 0.00 0.00 12043.47 2964.01 11319.85 00:28:05.606 [2024-11-15T10:46:06.459Z] =================================================================================================================== 00:28:05.606 [2024-11-15T10:46:06.459Z] Total : 10566.44 41.28 0.00 0.00 12043.47 2964.01 11319.85 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1400973 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:05.606 { 00:28:05.606 "params": { 00:28:05.606 "name": "Nvme$subsystem", 00:28:05.606 "trtype": "$TEST_TRANSPORT", 00:28:05.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.606 "adrfam": "ipv4", 00:28:05.606 "trsvcid": "$NVMF_PORT", 00:28:05.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.606 "hdgst": ${hdgst:-false}, 00:28:05.606 "ddgst": ${ddgst:-false} 00:28:05.606 }, 00:28:05.606 "method": "bdev_nvme_attach_controller" 00:28:05.606 } 00:28:05.606 EOF 00:28:05.606 )") 00:28:05.606 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:05.864 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
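Taken together, the rpc_cmd calls above and the bdevperf launches reduce to a short host-side sequence. A sketch assuming rpc_cmd maps to scripts/rpc.py against the default /var/tmp/spdk.sock socket (the RPC arguments are copied from the trace; gen_nvmf_target_json is the helper from test/nvmf/common.sh sourced by bdevperf.sh, and the attach-controller parameters it emits are printed in the trace just below):

#!/usr/bin/env bash
# Sketch: provision the target over JSON-RPC, then run bdevperf against it.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
source "$SPDK"/test/nvmf/common.sh                     # defines gen_nvmf_target_json and NVMF_* defaults
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"       # rpc_cmd in the test wraps this
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# First pass: 1-second verify workload, queue depth 128, 4 KiB I/O.
"$SPDK"/build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
# Second pass (the run in progress below): 15 seconds, with -f so bdevperf keeps
# running through errors while the target is killed out from under it.
"$SPDK"/build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f

The process substitution stands in for the /dev/fd/62 and /dev/fd/63 file descriptors seen in the trace; the 15-second pass is the one whose aborted I/O completions fill the remainder of this log after the target PID is killed.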
00:28:05.864 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:05.864 11:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:05.864 "params": { 00:28:05.864 "name": "Nvme1", 00:28:05.864 "trtype": "tcp", 00:28:05.864 "traddr": "10.0.0.2", 00:28:05.864 "adrfam": "ipv4", 00:28:05.864 "trsvcid": "4420", 00:28:05.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.864 "hdgst": false, 00:28:05.864 "ddgst": false 00:28:05.864 }, 00:28:05.864 "method": "bdev_nvme_attach_controller" 00:28:05.864 }' 00:28:05.864 [2024-11-15 11:46:06.501412] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:28:05.864 [2024-11-15 11:46:06.501480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400973 ] 00:28:05.864 [2024-11-15 11:46:06.596808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.864 [2024-11-15 11:46:06.644435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.429 Running I/O for 15 seconds... 00:28:08.304 10335.00 IOPS, 40.37 MiB/s [2024-11-15T10:46:09.728Z] 10287.00 IOPS, 40.18 MiB/s [2024-11-15T10:46:09.728Z] 11:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1400574 00:28:08.875 11:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:08.875 [2024-11-15 11:46:09.464121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 
11:46:09.464315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.464984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.464994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.465006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.875 [2024-11-15 11:46:09.465015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.875 [2024-11-15 11:46:09.465027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.876 [2024-11-15 11:46:09.465282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465501] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74744 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.876 [2024-11-15 11:46:09.465902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.876 [2024-11-15 11:46:09.465912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.465925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.465934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.465948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 
[2024-11-15 11:46:09.465958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.465971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.465981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.465992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.877 [2024-11-15 11:46:09.466842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.877 [2024-11-15 11:46:09.466864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.877 [2024-11-15 11:46:09.466887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.877 [2024-11-15 11:46:09.466911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.877 [2024-11-15 11:46:09.466925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.877 [2024-11-15 11:46:09.466936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.466948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.466958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.466970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.466980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 
11:46:09.466993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.878 [2024-11-15 11:46:09.467182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.878 [2024-11-15 11:46:09.467206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.467218] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb660 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.467230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.878 [2024-11-15 11:46:09.467238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.878 [2024-11-15 11:46:09.467247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74216 len:8 PRP1 0x0 PRP2 0x0 00:28:08.878 [2024-11-15 11:46:09.467258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.878 [2024-11-15 11:46:09.471559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.471626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.472400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.472423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.472435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.472711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.472979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.878 [2024-11-15 11:46:09.472991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.878 [2024-11-15 11:46:09.473002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.878 [2024-11-15 11:46:09.473013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.878 [2024-11-15 11:46:09.486700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.487153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.487201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.487227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.487831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.488348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.878 [2024-11-15 11:46:09.488361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.878 [2024-11-15 11:46:09.488372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:08.878 [2024-11-15 11:46:09.488382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.878 [2024-11-15 11:46:09.501278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.501668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.501693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.501704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.501972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.502246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.878 [2024-11-15 11:46:09.502260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.878 [2024-11-15 11:46:09.502270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.878 [2024-11-15 11:46:09.502281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.878 [2024-11-15 11:46:09.515951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.516442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.516500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.516525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.517110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.517415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.878 [2024-11-15 11:46:09.517428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.878 [2024-11-15 11:46:09.517438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.878 [2024-11-15 11:46:09.517449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.878 [2024-11-15 11:46:09.530601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.531128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.531153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.531164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.531432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.531707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.878 [2024-11-15 11:46:09.531721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.878 [2024-11-15 11:46:09.531730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.878 [2024-11-15 11:46:09.531741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.878 [2024-11-15 11:46:09.545355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.545862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.545887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.545899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.878 [2024-11-15 11:46:09.546167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.878 [2024-11-15 11:46:09.546437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.878 [2024-11-15 11:46:09.546449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.878 [2024-11-15 11:46:09.546472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.878 [2024-11-15 11:46:09.546482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.878 [2024-11-15 11:46:09.560087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.878 [2024-11-15 11:46:09.560618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.878 [2024-11-15 11:46:09.560644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.878 [2024-11-15 11:46:09.560655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.560923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.561190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.561204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.561214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.561225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.879 [2024-11-15 11:46:09.574875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.575434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.575464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.575476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.575744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.576012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.576025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.576036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.576047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.879 [2024-11-15 11:46:09.589698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.590253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.590277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.590288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.590563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.590832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.590845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.590856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.590867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.879 [2024-11-15 11:46:09.604281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.604838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.604862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.604874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.605142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.605410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.605423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.605433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.605444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.879 [2024-11-15 11:46:09.619107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.619633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.619658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.619669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.619937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.620207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.620220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.620229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.620240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.879 [2024-11-15 11:46:09.633899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.634340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.634364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.634375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.634648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.634917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.634931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.634941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.634952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.879 [2024-11-15 11:46:09.648617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.649006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.649030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.649046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.649313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.649587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.649601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.649611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.649622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.879 [2024-11-15 11:46:09.663271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.663671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.879 [2024-11-15 11:46:09.663695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.879 [2024-11-15 11:46:09.663706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.879 [2024-11-15 11:46:09.663973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.879 [2024-11-15 11:46:09.664242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.879 [2024-11-15 11:46:09.664254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.879 [2024-11-15 11:46:09.664265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.879 [2024-11-15 11:46:09.664275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.879 [2024-11-15 11:46:09.677937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.879 [2024-11-15 11:46:09.678388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.880 [2024-11-15 11:46:09.678412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.880 [2024-11-15 11:46:09.678423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.880 [2024-11-15 11:46:09.678696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.880 [2024-11-15 11:46:09.678967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.880 [2024-11-15 11:46:09.678980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.880 [2024-11-15 11:46:09.678990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.880 [2024-11-15 11:46:09.679000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.880 [2024-11-15 11:46:09.692658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.880 [2024-11-15 11:46:09.693113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.880 [2024-11-15 11:46:09.693137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.880 [2024-11-15 11:46:09.693148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.880 [2024-11-15 11:46:09.693416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.880 [2024-11-15 11:46:09.693695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.880 [2024-11-15 11:46:09.693710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.880 [2024-11-15 11:46:09.693720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.880 [2024-11-15 11:46:09.693731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.880 [2024-11-15 11:46:09.707366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.880 [2024-11-15 11:46:09.707924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.880 [2024-11-15 11:46:09.707970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.880 [2024-11-15 11:46:09.707994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.880 [2024-11-15 11:46:09.708591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:08.880 [2024-11-15 11:46:09.708885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.880 [2024-11-15 11:46:09.708903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.880 [2024-11-15 11:46:09.708918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.880 [2024-11-15 11:46:09.708933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:08.880 [2024-11-15 11:46:09.722469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.880 [2024-11-15 11:46:09.722865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.880 [2024-11-15 11:46:09.722889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:08.880 [2024-11-15 11:46:09.722901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:08.880 [2024-11-15 11:46:09.723169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.139 [2024-11-15 11:46:09.723438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.139 [2024-11-15 11:46:09.723452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.139 [2024-11-15 11:46:09.723471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.139 [2024-11-15 11:46:09.723482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.139 [2024-11-15 11:46:09.737133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.737689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.737714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.737726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.737994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.738263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.738276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.738291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.738302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.140 [2024-11-15 11:46:09.751952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.752512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.752560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.752583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.753033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.753303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.753316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.753326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.753336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.140 [2024-11-15 11:46:09.766733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.767280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.767303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.767314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.767589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.767858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.767872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.767883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.767893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.140 [2024-11-15 11:46:09.781536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.782036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.782081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.782105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.782702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.783231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.783244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.783254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.783265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.140 [2024-11-15 11:46:09.796153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.796721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.796767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.796792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.797317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.797600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.797615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.797626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.797636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.140 [2024-11-15 11:46:09.810776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.811338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.811383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.811407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.811971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.812241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.812254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.812264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.812274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.140 [2024-11-15 11:46:09.825394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.825871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.825895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.825907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.826174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.826441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.826454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.826472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.826482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.140 [2024-11-15 11:46:09.840112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.840596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.840642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.840674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.841199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.841473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.841487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.841498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.841508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.140 [2024-11-15 11:46:09.854880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.855429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.855496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.855521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.856073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.856342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.856355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.856365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.140 [2024-11-15 11:46:09.856375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.140 [2024-11-15 11:46:09.869499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.140 [2024-11-15 11:46:09.870055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.140 [2024-11-15 11:46:09.870099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.140 [2024-11-15 11:46:09.870123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.140 [2024-11-15 11:46:09.870563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.140 [2024-11-15 11:46:09.870833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.140 [2024-11-15 11:46:09.870846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.140 [2024-11-15 11:46:09.870856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.870867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.141 [2024-11-15 11:46:09.884258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.884812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.884865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.884890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.885402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.885682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.885696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.885707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.885717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.141 [2024-11-15 11:46:09.898865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.899346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.899370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.899381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.899655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.899924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.899937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.899948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.899958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.141 [2024-11-15 11:46:09.913609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.914168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.914211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.914235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.914835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.915397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.915410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.915420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.915431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.141 [2024-11-15 11:46:09.928308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.928867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.928892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.928903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.929171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.929439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.929452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.929478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.929489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.141 [2024-11-15 11:46:09.943127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.943579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.943625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.943649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.944175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.944443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.944456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.944475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.944485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.141 [2024-11-15 11:46:09.957872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.958397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.958421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.958433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.958707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.958976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.958990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.958999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.959010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.141 [2024-11-15 11:46:09.972650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.973150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.973173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.973184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.973451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.141 [2024-11-15 11:46:09.973727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.141 [2024-11-15 11:46:09.973741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.141 [2024-11-15 11:46:09.973752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.141 [2024-11-15 11:46:09.973763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.141 8548.33 IOPS, 33.39 MiB/s [2024-11-15T10:46:09.994Z] [2024-11-15 11:46:09.989338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.141 [2024-11-15 11:46:09.989822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.141 [2024-11-15 11:46:09.989845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.141 [2024-11-15 11:46:09.989857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.141 [2024-11-15 11:46:09.990124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.401 [2024-11-15 11:46:09.990392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:09.990407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:09.990418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:09.990429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.402 [2024-11-15 11:46:10.004079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.004547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.004572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.004584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.004851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.005120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.005133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.005144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.005155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.402 [2024-11-15 11:46:10.018806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.019365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.019388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.019400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.019677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.019946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.019959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.019970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.019981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.402 [2024-11-15 11:46:10.033622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.034161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.034190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.034202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.034477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.034745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.034759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.034769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.034779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.402 [2024-11-15 11:46:10.048672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.049149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.049174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.049186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.049454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.049731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.049745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.049756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.049766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.402 [2024-11-15 11:46:10.063422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.063908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.063933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.063945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.064213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.064489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.064504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.064515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.064525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.402 [2024-11-15 11:46:10.078168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.078730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.078753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.078765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.079035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.079303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.079316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.079327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.079337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.402 [2024-11-15 11:46:10.092962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.093503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.093527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.093539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.093807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.094075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.094088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.094098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.094108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.402 [2024-11-15 11:46:10.107755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.108287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.108311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.108322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.108601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.108899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.108912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.108943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.108968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.402 [2024-11-15 11:46:10.122345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.122887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.402 [2024-11-15 11:46:10.122932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.402 [2024-11-15 11:46:10.122954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.402 [2024-11-15 11:46:10.123552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.402 [2024-11-15 11:46:10.124118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.402 [2024-11-15 11:46:10.124136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.402 [2024-11-15 11:46:10.124156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.402 [2024-11-15 11:46:10.124170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.402 [2024-11-15 11:46:10.137751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.402 [2024-11-15 11:46:10.138305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.138328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.138339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.138613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.138880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.138893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.138903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.138913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.403 [2024-11-15 11:46:10.152509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.153047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.153091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.153114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.153660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.153929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.153941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.153951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.153960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.403 [2024-11-15 11:46:10.167367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.167906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.167952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.167976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.168575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.169061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.169074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.169084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.169093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.403 [2024-11-15 11:46:10.181971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.182531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.182576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.182599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.183065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.183333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.183347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.183358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.183368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.403 [2024-11-15 11:46:10.196757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.197325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.197348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.197359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.197632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.197901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.197915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.197925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.197936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.403 [2024-11-15 11:46:10.211580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.212109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.212131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.212143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.212409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.212685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.212699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.212709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.212720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.403 [2024-11-15 11:46:10.226355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.226911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.226938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.226950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.227216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.227490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.227504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.227514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.227524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.403 [2024-11-15 11:46:10.241162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.403 [2024-11-15 11:46:10.241676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.403 [2024-11-15 11:46:10.241701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.403 [2024-11-15 11:46:10.241713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.403 [2024-11-15 11:46:10.241979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.403 [2024-11-15 11:46:10.242248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.403 [2024-11-15 11:46:10.242261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.403 [2024-11-15 11:46:10.242271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.403 [2024-11-15 11:46:10.242282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.663 [2024-11-15 11:46:10.255926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.256497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.256543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.256566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.257124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.257393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.257406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.257417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.257427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.663 [2024-11-15 11:46:10.270563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.271036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.271060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.271072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.271339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.271619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.271634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.271644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.271654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.663 [2024-11-15 11:46:10.285277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.285843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.285889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.285912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.286359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.286635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.286649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.286660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.286670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.663 [2024-11-15 11:46:10.300090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.300643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.300668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.300680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.300947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.301216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.301229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.301239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.301250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.663 [2024-11-15 11:46:10.314877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.315346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.315370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.315381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.315656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.315927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.315940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.315954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.315964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.663 [2024-11-15 11:46:10.329583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.330144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.330188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.330211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.330810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.331399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.331424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.331445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.331482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.663 [2024-11-15 11:46:10.344329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.344894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.344940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.344963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.345437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.663 [2024-11-15 11:46:10.345713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.663 [2024-11-15 11:46:10.345727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.663 [2024-11-15 11:46:10.345738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.663 [2024-11-15 11:46:10.345748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.663 [2024-11-15 11:46:10.359110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.663 [2024-11-15 11:46:10.359633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.663 [2024-11-15 11:46:10.359657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.663 [2024-11-15 11:46:10.359668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.663 [2024-11-15 11:46:10.359934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.360200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.360212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.360222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.360233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.664 [2024-11-15 11:46:10.373864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.374432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.374488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.374512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.375097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.375442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.375455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.375471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.375481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.664 [2024-11-15 11:46:10.388589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.389143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.389166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.389177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.389443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.389718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.389732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.389742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.389753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.664 [2024-11-15 11:46:10.403385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.403941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.403987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.404012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.404601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.404877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.404891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.404902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.404912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.664 [2024-11-15 11:46:10.418045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.418588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.418648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.418678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.419262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.419593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.419607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.419618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.419629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.664 [2024-11-15 11:46:10.432736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.433273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.433318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.433341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.433870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.434139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.434153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.434163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.434173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.664 [2024-11-15 11:46:10.447537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.448094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.448148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.448171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.448771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.449039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.449052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.449062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.449073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.664 [2024-11-15 11:46:10.462211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.462738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.462762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.462773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.463039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.463312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.463326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.463336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.463345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.664 [2024-11-15 11:46:10.476997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.477525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.477547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.477559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.477826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.478094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.478107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.478117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.478128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.664 [2024-11-15 11:46:10.491646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.492201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.492225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.492237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.492511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.664 [2024-11-15 11:46:10.492781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.664 [2024-11-15 11:46:10.492794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.664 [2024-11-15 11:46:10.492805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.664 [2024-11-15 11:46:10.492816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.664 [2024-11-15 11:46:10.506462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.664 [2024-11-15 11:46:10.507040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.664 [2024-11-15 11:46:10.507063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.664 [2024-11-15 11:46:10.507075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.664 [2024-11-15 11:46:10.507343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.665 [2024-11-15 11:46:10.507627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.665 [2024-11-15 11:46:10.507642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.665 [2024-11-15 11:46:10.507656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.665 [2024-11-15 11:46:10.507667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.925 [2024-11-15 11:46:10.521085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.521621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.521645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.521657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.521924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.522193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.522206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.522217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.522227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.925 [2024-11-15 11:46:10.535861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.536320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.536344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.536356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.536630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.536898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.536912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.536922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.536932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.925 [2024-11-15 11:46:10.550563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.551089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.551133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.551157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.551678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.551948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.551960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.551971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.551981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.925 [2024-11-15 11:46:10.565359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.565891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.565914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.565925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.566192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.566467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.566481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.566492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.566503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.925 [2024-11-15 11:46:10.580109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.580583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.580607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.580619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.580888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.581156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.581169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.581180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.581190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.925 [2024-11-15 11:46:10.594796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.595370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.595415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.595440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.596046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.596316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.596330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.596340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.596350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.925 [2024-11-15 11:46:10.609455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.609987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.925 [2024-11-15 11:46:10.610010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.925 [2024-11-15 11:46:10.610025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.925 [2024-11-15 11:46:10.610292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.925 [2024-11-15 11:46:10.610568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.925 [2024-11-15 11:46:10.610582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.925 [2024-11-15 11:46:10.610592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.925 [2024-11-15 11:46:10.610603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.925 [2024-11-15 11:46:10.624208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.925 [2024-11-15 11:46:10.624787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.624833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.624856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.625440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.626047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.626061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.626071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.626081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.926 [2024-11-15 11:46:10.638932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.639488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.639512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.639523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.639791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.640059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.640072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.640083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.640093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.926 [2024-11-15 11:46:10.653718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.654279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.654325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.654348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.654959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.655237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.655251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.655261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.655272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.926 [2024-11-15 11:46:10.668380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.668919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.668964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.668987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.669584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.670175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.670207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.670222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.670235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.926 [2024-11-15 11:46:10.683937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.684498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.684543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.684566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.685150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.685546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.685560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.685571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.685581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.926 [2024-11-15 11:46:10.698691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.699223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.699270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.699294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.699894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.700278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.700291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.700306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.700317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.926 [2024-11-15 11:46:10.713446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.714003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.714027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.714038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.714304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.714580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.714594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.714604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.714613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.926 [2024-11-15 11:46:10.728214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.728770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.728793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.728804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.729070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.729338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.729352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.729362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.729372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.926 [2024-11-15 11:46:10.743026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.743610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.743634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.743646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.743914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.744183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.744196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.744206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.744217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.926 [2024-11-15 11:46:10.757602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.926 [2024-11-15 11:46:10.758077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.926 [2024-11-15 11:46:10.758101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.926 [2024-11-15 11:46:10.758113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.926 [2024-11-15 11:46:10.758381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.926 [2024-11-15 11:46:10.758657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.926 [2024-11-15 11:46:10.758671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.926 [2024-11-15 11:46:10.758682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.926 [2024-11-15 11:46:10.758692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.926 [2024-11-15 11:46:10.772315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.927 [2024-11-15 11:46:10.772842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.927 [2024-11-15 11:46:10.772865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:09.927 [2024-11-15 11:46:10.772877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:09.927 [2024-11-15 11:46:10.773144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:09.927 [2024-11-15 11:46:10.773413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.927 [2024-11-15 11:46:10.773425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.927 [2024-11-15 11:46:10.773435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.927 [2024-11-15 11:46:10.773445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.187 [2024-11-15 11:46:10.787070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.187 [2024-11-15 11:46:10.787624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.187 [2024-11-15 11:46:10.787647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.187 [2024-11-15 11:46:10.787658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.187 [2024-11-15 11:46:10.787926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.187 [2024-11-15 11:46:10.788194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.187 [2024-11-15 11:46:10.788206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.187 [2024-11-15 11:46:10.788217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.187 [2024-11-15 11:46:10.788227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.187 [2024-11-15 11:46:10.801898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.187 [2024-11-15 11:46:10.802365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.187 [2024-11-15 11:46:10.802388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.187 [2024-11-15 11:46:10.802403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.187 [2024-11-15 11:46:10.802679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.187 [2024-11-15 11:46:10.802949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.187 [2024-11-15 11:46:10.802962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.187 [2024-11-15 11:46:10.802973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.187 [2024-11-15 11:46:10.802983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.188 [2024-11-15 11:46:10.816667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.817196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.817220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.817231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.817507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.817775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.817788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.817798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.817807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.188 [2024-11-15 11:46:10.831436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.831962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.831986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.831997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.832262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.832536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.832550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.832560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.832571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.188 [2024-11-15 11:46:10.846200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.846736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.846760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.846772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.847039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.847312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.847326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.847336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.847346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.188 [2024-11-15 11:46:10.860976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.861456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.861486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.861497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.861766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.862033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.862047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.862057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.862067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.188 [2024-11-15 11:46:10.875694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.876223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.876246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.876257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.876530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.876799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.876812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.876821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.876831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.188 [2024-11-15 11:46:10.890443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.891000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.891023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.891034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.891300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.891574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.891587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.891602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.891612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.188 [2024-11-15 11:46:10.905254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.905834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.905878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.905902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.906500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.906790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.906803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.906814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.906824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.188 [2024-11-15 11:46:10.919989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.920518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.920542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.920553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.920821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.921088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.921102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.921112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.921123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.188 [2024-11-15 11:46:10.934759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.935275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.935319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.935343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.935887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.936256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.936273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.936287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.936300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.188 [2024-11-15 11:46:10.949730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.188 [2024-11-15 11:46:10.950333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.188 [2024-11-15 11:46:10.950357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.188 [2024-11-15 11:46:10.950368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.188 [2024-11-15 11:46:10.950644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.188 [2024-11-15 11:46:10.950914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.188 [2024-11-15 11:46:10.950927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.188 [2024-11-15 11:46:10.950937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.188 [2024-11-15 11:46:10.950947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.189 [2024-11-15 11:46:10.964334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.189 [2024-11-15 11:46:10.964787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.189 [2024-11-15 11:46:10.964811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.189 [2024-11-15 11:46:10.964823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.189 [2024-11-15 11:46:10.965090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.189 [2024-11-15 11:46:10.965359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.189 [2024-11-15 11:46:10.965373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.189 [2024-11-15 11:46:10.965383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.189 [2024-11-15 11:46:10.965394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.189 [2024-11-15 11:46:10.979056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.189 [2024-11-15 11:46:10.979596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.189 [2024-11-15 11:46:10.979622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.189 [2024-11-15 11:46:10.979634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.189 [2024-11-15 11:46:10.979902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.189 [2024-11-15 11:46:10.980171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.189 [2024-11-15 11:46:10.980185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.189 [2024-11-15 11:46:10.980195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.189 [2024-11-15 11:46:10.980205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.189 6411.25 IOPS, 25.04 MiB/s [2024-11-15T10:46:11.042Z] [2024-11-15 11:46:10.995796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.189 [2024-11-15 11:46:10.996369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.189 [2024-11-15 11:46:10.996423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.189 [2024-11-15 11:46:10.996448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.189 [2024-11-15 11:46:10.997045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.189 [2024-11-15 11:46:10.997338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.189 [2024-11-15 11:46:10.997351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.189 [2024-11-15 11:46:10.997361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.189 [2024-11-15 11:46:10.997372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.189 [2024-11-15 11:46:11.010525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.189 [2024-11-15 11:46:11.010982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.189 [2024-11-15 11:46:11.011006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.189 [2024-11-15 11:46:11.011017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.189 [2024-11-15 11:46:11.011285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.189 [2024-11-15 11:46:11.011565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.189 [2024-11-15 11:46:11.011579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.189 [2024-11-15 11:46:11.011589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.189 [2024-11-15 11:46:11.011600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.189 [2024-11-15 11:46:11.025248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.189 [2024-11-15 11:46:11.025673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.189 [2024-11-15 11:46:11.025696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.189 [2024-11-15 11:46:11.025708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.189 [2024-11-15 11:46:11.025976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.189 [2024-11-15 11:46:11.026246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.189 [2024-11-15 11:46:11.026259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.189 [2024-11-15 11:46:11.026269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.189 [2024-11-15 11:46:11.026286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.450 [2024-11-15 11:46:11.039950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.040510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.040534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.040545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.040818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.041086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.041100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.041111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.041122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.450 [2024-11-15 11:46:11.054779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.055342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.055367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.055379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.055655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.055924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.055938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.055949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.055961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.450 [2024-11-15 11:46:11.069406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.069819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.069844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.069856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.070124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.070392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.070406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.070416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.070426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.450 [2024-11-15 11:46:11.084092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.084577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.084601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.084613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.084880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.085147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.085159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.085175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.085185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.450 [2024-11-15 11:46:11.098832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.099310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.099334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.099345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.099621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.099890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.099903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.099913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.099924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.450 [2024-11-15 11:46:11.113592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.114049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.114072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.114084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.114352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.114627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.114641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.114652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.114662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.450 [2024-11-15 11:46:11.128284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.128685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.128708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.128719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.128987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.129255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.129268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.129278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.129288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.450 [2024-11-15 11:46:11.142929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.450 [2024-11-15 11:46:11.143416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.450 [2024-11-15 11:46:11.143475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.450 [2024-11-15 11:46:11.143501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.450 [2024-11-15 11:46:11.144085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.450 [2024-11-15 11:46:11.144604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.450 [2024-11-15 11:46:11.144618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.450 [2024-11-15 11:46:11.144628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.450 [2024-11-15 11:46:11.144638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.451 [2024-11-15 11:46:11.157524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.158002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.158047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.158071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.158607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.158876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.158889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.158899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.158910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.451 [2024-11-15 11:46:11.172286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.172793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.172817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.172828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.173096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.173365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.173379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.173388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.173400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.451 [2024-11-15 11:46:11.187032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.187518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.187573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.187597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.188183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.188795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.188816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.188831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.188845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.451 [2024-11-15 11:46:11.202287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.202836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.202889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.202913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.203487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.203759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.203772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.203781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.203791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.451 [2024-11-15 11:46:11.216926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.217406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.217430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.217442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.217717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.217986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.217999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.218010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.218020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.451 [2024-11-15 11:46:11.231653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.232124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.232148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.232159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.232431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.232706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.232720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.232730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.232741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.451 [2024-11-15 11:46:11.246364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.246765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.246788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.246800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.247067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.247334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.247348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.247357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.247368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.451 [2024-11-15 11:46:11.260977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.261530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.261575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.261600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.262186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.451 [2024-11-15 11:46:11.262574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.451 [2024-11-15 11:46:11.262589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.451 [2024-11-15 11:46:11.262599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.451 [2024-11-15 11:46:11.262610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.451 [2024-11-15 11:46:11.275717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.451 [2024-11-15 11:46:11.276248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.451 [2024-11-15 11:46:11.276293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.451 [2024-11-15 11:46:11.276317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.451 [2024-11-15 11:46:11.276755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.452 [2024-11-15 11:46:11.277025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.452 [2024-11-15 11:46:11.277038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.452 [2024-11-15 11:46:11.277052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.452 [2024-11-15 11:46:11.277062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.452 [2024-11-15 11:46:11.290426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.452 [2024-11-15 11:46:11.290970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.452 [2024-11-15 11:46:11.291015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.452 [2024-11-15 11:46:11.291039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.452 [2024-11-15 11:46:11.291601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.452 [2024-11-15 11:46:11.291870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.452 [2024-11-15 11:46:11.291883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.452 [2024-11-15 11:46:11.291893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.452 [2024-11-15 11:46:11.291904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.712 [2024-11-15 11:46:11.305030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.712 [2024-11-15 11:46:11.305585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.712 [2024-11-15 11:46:11.305609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.712 [2024-11-15 11:46:11.305621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.712 [2024-11-15 11:46:11.305889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.712 [2024-11-15 11:46:11.306157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.712 [2024-11-15 11:46:11.306170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.712 [2024-11-15 11:46:11.306181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.712 [2024-11-15 11:46:11.306191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.712 [2024-11-15 11:46:11.319828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.712 [2024-11-15 11:46:11.320362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.712 [2024-11-15 11:46:11.320407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.712 [2024-11-15 11:46:11.320431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.712 [2024-11-15 11:46:11.320873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.712 [2024-11-15 11:46:11.321142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.712 [2024-11-15 11:46:11.321155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.712 [2024-11-15 11:46:11.321166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.712 [2024-11-15 11:46:11.321176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.712 [2024-11-15 11:46:11.334551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.712 [2024-11-15 11:46:11.335104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.712 [2024-11-15 11:46:11.335128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.712 [2024-11-15 11:46:11.335139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.712 [2024-11-15 11:46:11.335406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.712 [2024-11-15 11:46:11.335683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.335696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.335707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.335717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.713 [2024-11-15 11:46:11.349333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.349864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.349909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.349933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.350532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.350976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.350994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.351008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.351022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.713 [2024-11-15 11:46:11.364467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.365021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.365066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.365089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.365688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.366186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.366199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.366209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.366219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.713 [2024-11-15 11:46:11.379086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.379607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.379639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.379652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.379920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.380187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.380200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.380210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.380220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.713 [2024-11-15 11:46:11.393839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.394386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.394409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.394421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.394695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.394964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.394977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.394987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.394997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.713 [2024-11-15 11:46:11.408641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.409189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.409213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.409225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.409498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.409768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.409781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.409791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.409801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.713 [2024-11-15 11:46:11.423420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.423993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.424038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.424061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.424667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.424964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.424978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.424988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.424999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.713 [2024-11-15 11:46:11.438116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.438671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.438694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.438706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.438973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.439241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.439255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.439265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.439275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.713 [2024-11-15 11:46:11.452900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.453443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.453473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.453486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.453753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.454020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.454033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.454043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.454053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.713 [2024-11-15 11:46:11.467677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.468154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.468198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.468221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.713 [2024-11-15 11:46:11.468820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.713 [2024-11-15 11:46:11.469359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.713 [2024-11-15 11:46:11.469373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.713 [2024-11-15 11:46:11.469387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.713 [2024-11-15 11:46:11.469397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.713 [2024-11-15 11:46:11.482264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.713 [2024-11-15 11:46:11.482732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.713 [2024-11-15 11:46:11.482755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.713 [2024-11-15 11:46:11.482767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.714 [2024-11-15 11:46:11.483034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.714 [2024-11-15 11:46:11.483302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.714 [2024-11-15 11:46:11.483316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.714 [2024-11-15 11:46:11.483326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.714 [2024-11-15 11:46:11.483336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.714 [2024-11-15 11:46:11.496985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.714 [2024-11-15 11:46:11.497535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.714 [2024-11-15 11:46:11.497559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.714 [2024-11-15 11:46:11.497571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.714 [2024-11-15 11:46:11.497839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.714 [2024-11-15 11:46:11.498107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.714 [2024-11-15 11:46:11.498120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.714 [2024-11-15 11:46:11.498131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.714 [2024-11-15 11:46:11.498141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.714 [2024-11-15 11:46:11.511671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.714 [2024-11-15 11:46:11.512199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.714 [2024-11-15 11:46:11.512222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.714 [2024-11-15 11:46:11.512233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.714 [2024-11-15 11:46:11.512510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.714 [2024-11-15 11:46:11.512780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.714 [2024-11-15 11:46:11.512793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.714 [2024-11-15 11:46:11.512803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.714 [2024-11-15 11:46:11.512813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.714 [2024-11-15 11:46:11.526440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.714 [2024-11-15 11:46:11.526896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.714 [2024-11-15 11:46:11.526919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.714 [2024-11-15 11:46:11.526930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.714 [2024-11-15 11:46:11.527198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.714 [2024-11-15 11:46:11.527473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.714 [2024-11-15 11:46:11.527487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.714 [2024-11-15 11:46:11.527497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.714 [2024-11-15 11:46:11.527507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.714 [2024-11-15 11:46:11.541128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.714 [2024-11-15 11:46:11.541656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.714 [2024-11-15 11:46:11.541680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.714 [2024-11-15 11:46:11.541692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.714 [2024-11-15 11:46:11.541959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.714 [2024-11-15 11:46:11.542227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.714 [2024-11-15 11:46:11.542240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.714 [2024-11-15 11:46:11.542251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.714 [2024-11-15 11:46:11.542261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.714 [2024-11-15 11:46:11.555892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.714 [2024-11-15 11:46:11.556373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.714 [2024-11-15 11:46:11.556418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.714 [2024-11-15 11:46:11.556442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.714 [2024-11-15 11:46:11.557042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.714 [2024-11-15 11:46:11.557352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.714 [2024-11-15 11:46:11.557365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.714 [2024-11-15 11:46:11.557375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.714 [2024-11-15 11:46:11.557385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.975 [2024-11-15 11:46:11.570522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.571025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.571054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.571067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.571334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.571609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.571623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.571634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.571645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.975 [2024-11-15 11:46:11.585273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.585801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.585824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.585837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.586103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.586371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.586384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.586394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.586405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.975 [2024-11-15 11:46:11.600049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.600577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.600621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.600647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.601190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.601466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.601480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.601491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.601501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.975 [2024-11-15 11:46:11.614654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.615189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.615234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.615258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.615764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.616034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.616047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.616057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.616067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.975 [2024-11-15 11:46:11.629447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.630020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.630067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.630091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.630574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.630845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.630857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.630868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.630878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.975 [2024-11-15 11:46:11.644247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.644824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.644868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.644893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.645421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.645699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.645713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.645724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.645734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.975 [2024-11-15 11:46:11.658859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.659392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.659415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.659426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.659700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.659969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.659982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.659997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.975 [2024-11-15 11:46:11.660008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.975 [2024-11-15 11:46:11.673632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.975 [2024-11-15 11:46:11.674183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.975 [2024-11-15 11:46:11.674206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.975 [2024-11-15 11:46:11.674217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.975 [2024-11-15 11:46:11.674491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.975 [2024-11-15 11:46:11.674761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.975 [2024-11-15 11:46:11.674774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.975 [2024-11-15 11:46:11.674785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.674795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.976 [2024-11-15 11:46:11.688421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.688921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.688944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.688956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.689223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.689497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.689511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.689521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.689532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.976 [2024-11-15 11:46:11.703154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.703729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.703775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.703799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.704384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.704964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.704984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.704997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.705010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.976 [2024-11-15 11:46:11.718304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.718869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.718915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.718939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.719535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.720022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.720035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.720045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.720056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.976 [2024-11-15 11:46:11.732918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.733366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.733389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.733400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.733675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.733945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.733958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.733969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.733980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.976 [2024-11-15 11:46:11.747616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.748085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.748108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.748119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.748385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.748660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.748674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.748684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.748694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.976 [2024-11-15 11:46:11.762327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.762901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.762953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.762978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.763567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.763836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.763849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.763859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.763869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.976 [2024-11-15 11:46:11.776973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.777524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.777548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.777559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.777827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.778096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.778110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.778120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.778130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.976 [2024-11-15 11:46:11.791752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.792273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.792297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.792308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.792583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.792852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.792865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.792875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.792886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.976 [2024-11-15 11:46:11.806519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.807053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.807098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.807122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.807730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.808000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.808013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.976 [2024-11-15 11:46:11.808023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.976 [2024-11-15 11:46:11.808033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.976 [2024-11-15 11:46:11.821163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.976 [2024-11-15 11:46:11.821631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.976 [2024-11-15 11:46:11.821654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:10.976 [2024-11-15 11:46:11.821666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:10.976 [2024-11-15 11:46:11.821933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:10.976 [2024-11-15 11:46:11.822202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.976 [2024-11-15 11:46:11.822215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.977 [2024-11-15 11:46:11.822225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.977 [2024-11-15 11:46:11.822235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.237 [2024-11-15 11:46:11.835865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.237 [2024-11-15 11:46:11.836438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.237 [2024-11-15 11:46:11.836493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.237 [2024-11-15 11:46:11.836518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.237 [2024-11-15 11:46:11.837035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.237 [2024-11-15 11:46:11.837303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.237 [2024-11-15 11:46:11.837316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.237 [2024-11-15 11:46:11.837326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.237 [2024-11-15 11:46:11.837337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.237 [2024-11-15 11:46:11.850463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.237 [2024-11-15 11:46:11.850998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.237 [2024-11-15 11:46:11.851050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.237 [2024-11-15 11:46:11.851074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.237 [2024-11-15 11:46:11.851672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.237 [2024-11-15 11:46:11.852015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.237 [2024-11-15 11:46:11.852028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.237 [2024-11-15 11:46:11.852043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.237 [2024-11-15 11:46:11.852054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.237 [2024-11-15 11:46:11.865172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.237 [2024-11-15 11:46:11.865659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.237 [2024-11-15 11:46:11.865683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.237 [2024-11-15 11:46:11.865694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.237 [2024-11-15 11:46:11.865961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.237 [2024-11-15 11:46:11.866228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.237 [2024-11-15 11:46:11.866240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.237 [2024-11-15 11:46:11.866251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.237 [2024-11-15 11:46:11.866261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.237 [2024-11-15 11:46:11.879882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.237 [2024-11-15 11:46:11.880406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.237 [2024-11-15 11:46:11.880430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.237 [2024-11-15 11:46:11.880479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.237 [2024-11-15 11:46:11.881032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.237 [2024-11-15 11:46:11.881300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.237 [2024-11-15 11:46:11.881313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.237 [2024-11-15 11:46:11.881323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.237 [2024-11-15 11:46:11.881333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.237 [2024-11-15 11:46:11.894453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.237 [2024-11-15 11:46:11.894900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.237 [2024-11-15 11:46:11.894923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.237 [2024-11-15 11:46:11.894934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.895202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.895477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.895491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.895501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.895512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.238 [2024-11-15 11:46:11.909149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.909715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.909760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.909784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.910312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.910587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.910600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.910611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.910621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.238 [2024-11-15 11:46:11.923743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.924311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.924354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.924378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.924988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.925256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.925270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.925281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.925291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.238 [2024-11-15 11:46:11.938401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.938967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.939013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.939037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.939636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.940223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.940248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.940270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.940289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.238 [2024-11-15 11:46:11.953185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.953742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.953795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.953819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.954317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.954592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.954606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.954616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.954627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.238 [2024-11-15 11:46:11.967980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.968420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.968442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.968454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.968728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.968996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.969010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.969020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.969030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.238 [2024-11-15 11:46:11.982667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.983100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.983123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.983135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.983401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.983678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.983693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.983704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.983714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.238 5129.00 IOPS, 20.04 MiB/s [2024-11-15T10:46:12.091Z] [2024-11-15 11:46:11.998535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:11.999090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:11.999135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:11.999159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:11.999622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:11.999890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:11.999903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:11.999914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:11.999924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.238 [2024-11-15 11:46:12.013318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:12.013813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.238 [2024-11-15 11:46:12.013837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.238 [2024-11-15 11:46:12.013849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.238 [2024-11-15 11:46:12.014117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.238 [2024-11-15 11:46:12.014385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.238 [2024-11-15 11:46:12.014398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.238 [2024-11-15 11:46:12.014409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.238 [2024-11-15 11:46:12.014420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.238 [2024-11-15 11:46:12.028060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.238 [2024-11-15 11:46:12.028621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.239 [2024-11-15 11:46:12.028667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.239 [2024-11-15 11:46:12.028692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.239 [2024-11-15 11:46:12.029189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.239 [2024-11-15 11:46:12.029590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.239 [2024-11-15 11:46:12.029609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.239 [2024-11-15 11:46:12.029625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.239 [2024-11-15 11:46:12.029639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.239 [2024-11-15 11:46:12.043285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.239 [2024-11-15 11:46:12.043770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.239 [2024-11-15 11:46:12.043794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.239 [2024-11-15 11:46:12.043806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.239 [2024-11-15 11:46:12.044074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.239 [2024-11-15 11:46:12.044342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.239 [2024-11-15 11:46:12.044356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.239 [2024-11-15 11:46:12.044371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.239 [2024-11-15 11:46:12.044381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.239 [2024-11-15 11:46:12.057980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.239 [2024-11-15 11:46:12.058485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.239 [2024-11-15 11:46:12.058534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.239 [2024-11-15 11:46:12.058559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.239 [2024-11-15 11:46:12.059144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.239 [2024-11-15 11:46:12.059674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.239 [2024-11-15 11:46:12.059689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.239 [2024-11-15 11:46:12.059699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.239 [2024-11-15 11:46:12.059710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.239 [2024-11-15 11:46:12.073349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.239 [2024-11-15 11:46:12.073864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.239 [2024-11-15 11:46:12.073888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.239 [2024-11-15 11:46:12.073900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.239 [2024-11-15 11:46:12.074168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.239 [2024-11-15 11:46:12.074436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.239 [2024-11-15 11:46:12.074449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.239 [2024-11-15 11:46:12.074467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.239 [2024-11-15 11:46:12.074478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.500 [2024-11-15 11:46:12.088112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.500 [2024-11-15 11:46:12.088671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.500 [2024-11-15 11:46:12.088719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.500 [2024-11-15 11:46:12.088744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.500 [2024-11-15 11:46:12.089335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.500 [2024-11-15 11:46:12.089610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.500 [2024-11-15 11:46:12.089625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.500 [2024-11-15 11:46:12.089635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.500 [2024-11-15 11:46:12.089645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.500 [2024-11-15 11:46:12.102784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.500 [2024-11-15 11:46:12.103335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.500 [2024-11-15 11:46:12.103358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.500 [2024-11-15 11:46:12.103370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.500 [2024-11-15 11:46:12.103644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.500 [2024-11-15 11:46:12.103913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.500 [2024-11-15 11:46:12.103926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.500 [2024-11-15 11:46:12.103936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.500 [2024-11-15 11:46:12.103946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.500 [2024-11-15 11:46:12.117603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.500 [2024-11-15 11:46:12.118075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.500 [2024-11-15 11:46:12.118098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.500 [2024-11-15 11:46:12.118109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.500 [2024-11-15 11:46:12.118376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.500 [2024-11-15 11:46:12.118652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.118666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.118676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.118686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.501 [2024-11-15 11:46:12.132310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.132834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.132857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.132870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.133137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.133403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.133417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.133427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.133437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.501 [2024-11-15 11:46:12.147073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.147638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.147692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.147717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.148301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.148832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.148845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.148856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.148867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.501 [2024-11-15 11:46:12.161736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.162260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.162283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.162295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.162570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.162839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.162852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.162862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.162873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.501 [2024-11-15 11:46:12.176511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.177060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.177083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.177094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.177361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.177637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.177652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.177662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.177673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.501 [2024-11-15 11:46:12.191278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.191756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.191779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.191790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.192062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.192330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.192344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.192354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.192364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.501 [2024-11-15 11:46:12.206006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.206559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.206582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.206594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.206869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.207139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.207151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.207161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.207171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.501 [2024-11-15 11:46:12.220732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.221201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.221225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.501 [2024-11-15 11:46:12.221236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.501 [2024-11-15 11:46:12.221511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.501 [2024-11-15 11:46:12.221780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.501 [2024-11-15 11:46:12.221793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.501 [2024-11-15 11:46:12.221803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.501 [2024-11-15 11:46:12.221813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.501 [2024-11-15 11:46:12.235442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.501 [2024-11-15 11:46:12.236023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.501 [2024-11-15 11:46:12.236069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.236093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.236694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.236997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.237009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.237024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.237035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.502 [2024-11-15 11:46:12.250153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.250714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.250738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.250750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.251018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.251286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.251298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.251310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.251321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.502 [2024-11-15 11:46:12.264954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.265479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.265503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.265514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.265781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.266049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.266063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.266073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.266083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.502 [2024-11-15 11:46:12.279750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.280250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.280273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.280286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.280559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.280828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.280841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.280851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.280861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.502 [2024-11-15 11:46:12.294516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.295020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.295043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.295055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.295322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.295597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.295612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.295623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.295636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.502 [2024-11-15 11:46:12.309288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.309758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.309782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.309794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.310061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.310330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.310343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.310353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.310364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.502 [2024-11-15 11:46:12.324018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.324493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.324518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.324529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.324797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.325065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.325078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.325088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.325098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.502 [2024-11-15 11:46:12.338759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.502 [2024-11-15 11:46:12.339158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.502 [2024-11-15 11:46:12.339184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.502 [2024-11-15 11:46:12.339196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.502 [2024-11-15 11:46:12.339470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.502 [2024-11-15 11:46:12.339740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.502 [2024-11-15 11:46:12.339753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.502 [2024-11-15 11:46:12.339764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.502 [2024-11-15 11:46:12.339774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.763 [2024-11-15 11:46:12.353414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.763 [2024-11-15 11:46:12.353974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.763 [2024-11-15 11:46:12.353998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.763 [2024-11-15 11:46:12.354009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.763 [2024-11-15 11:46:12.354276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.763 [2024-11-15 11:46:12.354553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.763 [2024-11-15 11:46:12.354567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.763 [2024-11-15 11:46:12.354578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.763 [2024-11-15 11:46:12.354589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.763 [2024-11-15 11:46:12.368221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.763 [2024-11-15 11:46:12.368776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.763 [2024-11-15 11:46:12.368822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.763 [2024-11-15 11:46:12.368846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.763 [2024-11-15 11:46:12.369382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.763 [2024-11-15 11:46:12.369657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.763 [2024-11-15 11:46:12.369671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.763 [2024-11-15 11:46:12.369682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.763 [2024-11-15 11:46:12.369692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.763 [2024-11-15 11:46:12.382825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.763 [2024-11-15 11:46:12.383268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.763 [2024-11-15 11:46:12.383291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.763 [2024-11-15 11:46:12.383302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.763 [2024-11-15 11:46:12.383579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.763 [2024-11-15 11:46:12.383849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.763 [2024-11-15 11:46:12.383862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.763 [2024-11-15 11:46:12.383872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.763 [2024-11-15 11:46:12.383882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.763 [2024-11-15 11:46:12.397526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.763 [2024-11-15 11:46:12.397932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.763 [2024-11-15 11:46:12.397977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.763 [2024-11-15 11:46:12.398001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.763 [2024-11-15 11:46:12.398540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.763 [2024-11-15 11:46:12.398811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.763 [2024-11-15 11:46:12.398824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.763 [2024-11-15 11:46:12.398834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.763 [2024-11-15 11:46:12.398844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.763 [2024-11-15 11:46:12.412254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.763 [2024-11-15 11:46:12.412708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.763 [2024-11-15 11:46:12.412731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.763 [2024-11-15 11:46:12.412743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.763 [2024-11-15 11:46:12.413010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.763 [2024-11-15 11:46:12.413278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.763 [2024-11-15 11:46:12.413291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.763 [2024-11-15 11:46:12.413301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.763 [2024-11-15 11:46:12.413312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.763 [2024-11-15 11:46:12.426959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.763 [2024-11-15 11:46:12.427433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.763 [2024-11-15 11:46:12.427457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.763 [2024-11-15 11:46:12.427475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.427742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.428011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.428024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.428039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.428050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.764 [2024-11-15 11:46:12.441693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.442090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.442114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.442125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.442393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.442667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.442680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.442691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.442701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.764 [2024-11-15 11:46:12.456338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.456788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.456812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.456824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.457091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.457360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.457374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.457384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.457394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1400574 Killed "${NVMF_APP[@]}" "$@" 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1402021 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1402021 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1402021 ']' 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
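The repeated connect() failures above all report errno 111, which on Linux is ECONNREFUSED: bdevperf.sh line 35 has just killed the previous nvmf_tgt (pid 1400574), so nothing is listening on 10.0.0.2 port 4420 while the host keeps retrying its controller resets, and every attempt fails until the replacement target (pid 1402021) is up and listening. As an illustrative check that is not part of the test scripts, the errno value can be decoded on the build host with:

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # prints ECONNREFUSED / Connection refused on Linux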
00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:11.764 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.764 [2024-11-15 11:46:12.471081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.471529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.471554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.471565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.471833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.472102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.472115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.472125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.472136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.764 [2024-11-15 11:46:12.485789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.486239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.486261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.486272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.486547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.486816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.486829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.486840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.486851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.764 [2024-11-15 11:46:12.500489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.500963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.500986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.500997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.501265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.501540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.501555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.501565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.501580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.764 [2024-11-15 11:46:12.515237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.515771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.515795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.515807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.516075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.516342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.516355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.516365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.516376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.764 [2024-11-15 11:46:12.522879] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:28:11.764 [2024-11-15 11:46:12.522932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.764 [2024-11-15 11:46:12.530094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.530577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.530602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.530614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.530881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.531149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.764 [2024-11-15 11:46:12.531162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.764 [2024-11-15 11:46:12.531173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.764 [2024-11-15 11:46:12.531183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.764 [2024-11-15 11:46:12.544856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.764 [2024-11-15 11:46:12.545337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.764 [2024-11-15 11:46:12.545360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.764 [2024-11-15 11:46:12.545371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.764 [2024-11-15 11:46:12.545643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.764 [2024-11-15 11:46:12.545912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.765 [2024-11-15 11:46:12.545925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.765 [2024-11-15 11:46:12.545941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.765 [2024-11-15 11:46:12.545951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.765 [2024-11-15 11:46:12.559588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.765 [2024-11-15 11:46:12.560117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.765 [2024-11-15 11:46:12.560140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.765 [2024-11-15 11:46:12.560152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.765 [2024-11-15 11:46:12.560420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.765 [2024-11-15 11:46:12.560696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.765 [2024-11-15 11:46:12.560710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.765 [2024-11-15 11:46:12.560721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.765 [2024-11-15 11:46:12.560731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.765 [2024-11-15 11:46:12.574366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.765 [2024-11-15 11:46:12.574772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.765 [2024-11-15 11:46:12.574796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.765 [2024-11-15 11:46:12.574808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.765 [2024-11-15 11:46:12.575076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.765 [2024-11-15 11:46:12.575343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.765 [2024-11-15 11:46:12.575356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.765 [2024-11-15 11:46:12.575366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.765 [2024-11-15 11:46:12.575377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.765 [2024-11-15 11:46:12.589017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.765 [2024-11-15 11:46:12.589406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.765 [2024-11-15 11:46:12.589429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.765 [2024-11-15 11:46:12.589441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.765 [2024-11-15 11:46:12.589716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.765 [2024-11-15 11:46:12.589985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.765 [2024-11-15 11:46:12.589998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.765 [2024-11-15 11:46:12.590008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.765 [2024-11-15 11:46:12.590019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.765 [2024-11-15 11:46:12.596470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.765 [2024-11-15 11:46:12.603660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.765 [2024-11-15 11:46:12.604190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.765 [2024-11-15 11:46:12.604213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:11.765 [2024-11-15 11:46:12.604225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:11.765 [2024-11-15 11:46:12.604500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:11.765 [2024-11-15 11:46:12.604770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.765 [2024-11-15 11:46:12.604783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.765 [2024-11-15 11:46:12.604793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.765 [2024-11-15 11:46:12.604804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.026 [2024-11-15 11:46:12.618466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.026 [2024-11-15 11:46:12.618871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.026 [2024-11-15 11:46:12.618897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.026 [2024-11-15 11:46:12.618909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.026 [2024-11-15 11:46:12.619177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.026 [2024-11-15 11:46:12.619446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.026 [2024-11-15 11:46:12.619466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.026 [2024-11-15 11:46:12.619478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.026 [2024-11-15 11:46:12.619488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.026 [2024-11-15 11:46:12.633131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.026 [2024-11-15 11:46:12.633537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.026 [2024-11-15 11:46:12.633562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.026 [2024-11-15 11:46:12.633574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.026 [2024-11-15 11:46:12.633843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.026 [2024-11-15 11:46:12.634112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.026 [2024-11-15 11:46:12.634125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.026 [2024-11-15 11:46:12.634136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.026 [2024-11-15 11:46:12.634146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.026 [2024-11-15 11:46:12.637881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.026 [2024-11-15 11:46:12.637909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.026 [2024-11-15 11:46:12.637919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.026 [2024-11-15 11:46:12.637924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.026 [2024-11-15 11:46:12.637930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:12.026 [2024-11-15 11:46:12.639372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.026 [2024-11-15 11:46:12.639388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.026 [2024-11-15 11:46:12.639391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.026 [2024-11-15 11:46:12.647816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.026 [2024-11-15 11:46:12.648320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.026 [2024-11-15 11:46:12.648345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.026 [2024-11-15 11:46:12.648358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.026 [2024-11-15 11:46:12.648635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.026 [2024-11-15 11:46:12.648906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.026 [2024-11-15 11:46:12.648920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.026 [2024-11-15 11:46:12.648930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.026 [2024-11-15 11:46:12.648942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.026 [2024-11-15 11:46:12.662618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.026 [2024-11-15 11:46:12.663043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.026 [2024-11-15 11:46:12.663069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.026 [2024-11-15 11:46:12.663082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.026 [2024-11-15 11:46:12.663351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.026 [2024-11-15 11:46:12.663626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.026 [2024-11-15 11:46:12.663641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.026 [2024-11-15 11:46:12.663653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.026 [2024-11-15 11:46:12.663665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.026 [2024-11-15 11:46:12.677313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.026 [2024-11-15 11:46:12.677731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.026 [2024-11-15 11:46:12.677758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.026 [2024-11-15 11:46:12.677771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.026 [2024-11-15 11:46:12.678039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.026 [2024-11-15 11:46:12.678309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.026 [2024-11-15 11:46:12.678322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.026 [2024-11-15 11:46:12.678340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.026 [2024-11-15 11:46:12.678350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.026 [2024-11-15 11:46:12.692007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.026 [2024-11-15 11:46:12.692473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.026 [2024-11-15 11:46:12.692501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.026 [2024-11-15 11:46:12.692513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.026 [2024-11-15 11:46:12.692782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.026 [2024-11-15 11:46:12.693052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.026 [2024-11-15 11:46:12.693066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.026 [2024-11-15 11:46:12.693077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.026 [2024-11-15 11:46:12.693089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.026 [2024-11-15 11:46:12.706741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.707150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.707174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.707186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.707456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.707734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.707747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.707758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.707769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.027 [2024-11-15 11:46:12.721430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.721918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.721942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.721954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.722222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.722496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.722511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.722521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.722532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.027 [2024-11-15 11:46:12.736154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.736694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.736718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.736730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.736996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.737266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.737279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.737290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.737301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.027 [2024-11-15 11:46:12.750927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.751400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.751423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.751435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.751709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.751978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.751991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.752001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.752012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 [2024-11-15 11:46:12.765636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.766166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.766189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.766201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.766476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.766746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.766758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.766770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.766785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.027 [2024-11-15 11:46:12.780411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.780891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.780915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.780926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.781194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.781470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.781483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.781494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.781505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 [2024-11-15 11:46:12.795135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.027 [2024-11-15 11:46:12.795640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.027 [2024-11-15 11:46:12.795665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.027 [2024-11-15 11:46:12.795677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.027 [2024-11-15 11:46:12.795945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.027 [2024-11-15 11:46:12.796213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.027 [2024-11-15 11:46:12.796226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.027 [2024-11-15 11:46:12.796236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.027 [2024-11-15 11:46:12.796247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.027 [2024-11-15 11:46:12.798109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.027 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:12.027 [2024-11-15 11:46:12.809890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.028 [2024-11-15 11:46:12.810443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.028 [2024-11-15 11:46:12.810471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.028 [2024-11-15 11:46:12.810483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.028 [2024-11-15 11:46:12.810754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.028 [2024-11-15 11:46:12.811023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.028 [2024-11-15 11:46:12.811036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.028 [2024-11-15 11:46:12.811046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.028 [2024-11-15 11:46:12.811055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.028 [2024-11-15 11:46:12.824686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.028 [2024-11-15 11:46:12.825215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.028 [2024-11-15 11:46:12.825239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.028 [2024-11-15 11:46:12.825251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.028 [2024-11-15 11:46:12.825523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.028 [2024-11-15 11:46:12.825793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.028 [2024-11-15 11:46:12.825807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.028 [2024-11-15 11:46:12.825817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.028 [2024-11-15 11:46:12.825827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.028 Malloc0 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:12.028 [2024-11-15 11:46:12.839450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.028 [2024-11-15 11:46:12.840008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.028 [2024-11-15 11:46:12.840031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2505a40 with addr=10.0.0.2, port=4420 00:28:12.028 [2024-11-15 11:46:12.840043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505a40 is same with the state(6) to be set 00:28:12.028 [2024-11-15 11:46:12.840310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2505a40 (9): Bad file descriptor 00:28:12.028 [2024-11-15 11:46:12.840585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.028 [2024-11-15 11:46:12.840598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.028 [2024-11-15 11:46:12.840609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.028 [2024-11-15 11:46:12.840619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:12.028 [2024-11-15 11:46:12.847103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.028 11:46:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1400973 00:28:12.028 [2024-11-15 11:46:12.854243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.287 [2024-11-15 11:46:12.878193] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
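For reference, the target-side configuration traced at host/bdevperf.sh@17 through @21 above amounts to five RPCs against the freshly started nvmf_tgt. A minimal sketch of the equivalent standalone calls, reusing the arguments that appear in the trace and assuming the default /var/tmp/spdk.sock RPC socket the target is reported to listen on (the test's rpc_cmd helper handles the socket wiring itself):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener on 10.0.0.2 port 4420 is announced (the "NVMe/TCP Target Listening" notice above), the still-running bdevperf host reconnects and the controller resets start succeeding, which is what the "Resetting controller successful" message reports.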
00:28:13.223 4471.83 IOPS, 17.47 MiB/s [2024-11-15T10:46:15.019Z] 5353.43 IOPS, 20.91 MiB/s [2024-11-15T10:46:16.093Z] 6049.62 IOPS, 23.63 MiB/s [2024-11-15T10:46:17.066Z] 6509.56 IOPS, 25.43 MiB/s [2024-11-15T10:46:18.443Z] 6920.10 IOPS, 27.03 MiB/s [2024-11-15T10:46:19.380Z] 7264.64 IOPS, 28.38 MiB/s [2024-11-15T10:46:20.315Z] 7504.08 IOPS, 29.31 MiB/s [2024-11-15T10:46:21.251Z] 7726.31 IOPS, 30.18 MiB/s [2024-11-15T10:46:22.189Z] 7902.71 IOPS, 30.87 MiB/s
00:28:21.336 Latency(us)
00:28:21.336 [2024-11-15T10:46:22.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:21.336 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:21.336 Verification LBA range: start 0x0 length 0x4000
00:28:21.336 Nvme1n1 : 15.01 8111.21 31.68 6873.27 0.00 8509.95 673.98 13822.14
00:28:21.336 [2024-11-15T10:46:22.189Z] ===================================================================================================================
00:28:21.336 [2024-11-15T10:46:22.189Z] Total : 8111.21 31.68 6873.27 0.00 8509.95 673.98 13822.14
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:21.596 rmmod nvme_tcp
00:28:21.596 rmmod nvme_fabrics
00:28:21.596 rmmod nvme_keyring
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1402021 ']'
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1402021
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1402021 ']'
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1402021
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1402021
00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1402021' 00:28:21.596 killing process with pid 1402021 00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1402021 00:28:21.596 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1402021 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.855 11:46:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.763 11:46:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.763 00:28:23.763 real 0m25.427s 00:28:23.763 user 1m1.708s 00:28:23.763 sys 0m5.952s 00:28:23.763 11:46:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:23.763 11:46:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:23.763 ************************************ 00:28:23.763 END TEST nvmf_bdevperf 00:28:23.763 ************************************ 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.022 ************************************ 00:28:24.022 START TEST nvmf_target_disconnect 00:28:24.022 ************************************ 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:24.022 * Looking for test storage... 
00:28:24.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.022 --rc genhtml_branch_coverage=1 00:28:24.022 --rc genhtml_function_coverage=1 00:28:24.022 --rc genhtml_legend=1 00:28:24.022 --rc geninfo_all_blocks=1 00:28:24.022 --rc geninfo_unexecuted_blocks=1 00:28:24.022 00:28:24.022 ' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.022 --rc genhtml_branch_coverage=1 00:28:24.022 --rc genhtml_function_coverage=1 00:28:24.022 --rc genhtml_legend=1 00:28:24.022 --rc geninfo_all_blocks=1 00:28:24.022 --rc geninfo_unexecuted_blocks=1 00:28:24.022 00:28:24.022 ' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.022 --rc genhtml_branch_coverage=1 00:28:24.022 --rc genhtml_function_coverage=1 00:28:24.022 --rc genhtml_legend=1 00:28:24.022 --rc geninfo_all_blocks=1 00:28:24.022 --rc geninfo_unexecuted_blocks=1 00:28:24.022 00:28:24.022 ' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.022 --rc genhtml_branch_coverage=1 00:28:24.022 --rc genhtml_function_coverage=1 00:28:24.022 --rc genhtml_legend=1 00:28:24.022 --rc geninfo_all_blocks=1 00:28:24.022 --rc geninfo_unexecuted_blocks=1 00:28:24.022 00:28:24.022 ' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:24.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.022 11:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.591 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:30.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:30.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:30.592 Found net devices under 0000:af:00.0: cvl_0_0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:30.592 Found net devices under 0000:af:00.1: cvl_0_1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:28:30.592 00:28:30.592 --- 10.0.0.2 ping statistics --- 00:28:30.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.592 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:30.592 00:28:30.592 --- 10.0.0.1 ping statistics --- 00:28:30.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.592 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 ************************************ 00:28:30.592 START TEST nvmf_target_disconnect_tc1 00:28:30.592 ************************************ 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:30.592 11:46:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.592 [2024-11-15 11:46:30.703510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.592 [2024-11-15 11:46:30.703615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x55d460 with addr=10.0.0.2, port=4420 00:28:30.592 [2024-11-15 11:46:30.703668] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:30.592 [2024-11-15 11:46:30.703702] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:30.592 [2024-11-15 11:46:30.703722] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:30.592 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:30.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:30.592 Initializing NVMe Controllers 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:30.592 00:28:30.592 real 0m0.139s 00:28:30.592 user 0m0.070s 00:28:30.592 sys 0m0.069s 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 ************************************ 00:28:30.592 END TEST nvmf_target_disconnect_tc1 00:28:30.592 ************************************ 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 ************************************ 00:28:30.592 START TEST nvmf_target_disconnect_tc2 00:28:30.592 ************************************ 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1407508 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1407508 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1407508 ']' 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.592 11:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 [2024-11-15 11:46:30.850336] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:28:30.592 [2024-11-15 11:46:30.850389] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.592 [2024-11-15 11:46:30.921152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.592 [2024-11-15 11:46:30.960703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.592 [2024-11-15 11:46:30.960736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:30.592 [2024-11-15 11:46:30.960742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.592 [2024-11-15 11:46:30.960748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.592 [2024-11-15 11:46:30.960752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.592 [2024-11-15 11:46:30.962388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:30.592 [2024-11-15 11:46:30.962502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:30.592 [2024-11-15 11:46:30.962614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:30.592 [2024-11-15 11:46:30.962615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 Malloc0 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 [2024-11-15 11:46:31.148909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 11:46:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.592 [2024-11-15 11:46:31.181162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.592 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1407535 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:30.593 11:46:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:32.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1407508 00:28:32.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with 
error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Write completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 [2024-11-15 11:46:33.209515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.506 starting I/O failed 00:28:32.506 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read 
completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 [2024-11-15 11:46:33.209711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 
00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 [2024-11-15 11:46:33.210009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting 
I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Write completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 Read completed with error (sct=0, sc=8) 00:28:32.507 starting I/O failed 00:28:32.507 [2024-11-15 11:46:33.210199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.507 [2024-11-15 11:46:33.210492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.210514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.210688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.210721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.210929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.210962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.211173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.211205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.211410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.211443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.211674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.211708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.211918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.211950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 00:28:32.507 [2024-11-15 11:46:33.212135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.507 [2024-11-15 11:46:33.212173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.507 qpair failed and we were unable to recover it. 
00:28:32.507 [2024-11-15 11:46:33.212476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.212511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.212712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.212744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.212935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.212966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.213104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.213135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.213337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.213369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.213593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.213626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.213741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.213773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.213983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.214015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.214156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.214166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.214346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.214380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 
00:28:32.508 [2024-11-15 11:46:33.214531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.214565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.214688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.214718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.214906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.214938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.215149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.215181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.215387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.215419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.215579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.215612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.215815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.215847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.216029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.216060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.216290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.216323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.216605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.216638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 
00:28:32.508 [2024-11-15 11:46:33.216910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.216950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.217207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.217239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.217421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.217432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.217628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.217662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.217808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.217839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.218061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.218091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.218337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.218347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.218488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.218500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.218577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.218586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.218870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.218882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 
00:28:32.508 [2024-11-15 11:46:33.218974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.218984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.508 [2024-11-15 11:46:33.219813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.508 qpair failed and we were unable to recover it. 00:28:32.508 [2024-11-15 11:46:33.219975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.220005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.220258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.220296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 
00:28:32.509 [2024-11-15 11:46:33.220426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.220477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.220688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.220720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.220932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.220963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.221185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.221195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.221294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.221304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.221548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.221580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.221698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.221730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.221910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.221941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.222085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.222096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.222255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.222287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 
00:28:32.509 [2024-11-15 11:46:33.222398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.222429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.222780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.222858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.223168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.223204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.223418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.223435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.223524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.223540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.223721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.223738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.223844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.223876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.224079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.224112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.224318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.224364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.224600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.224617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 
00:28:32.509 [2024-11-15 11:46:33.224798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.224811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.224976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.225007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.225192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.225224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.225360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.225392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.225537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.225570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.225716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.225748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.225960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.225996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.226277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.226292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.226474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.226508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.226711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.226743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 
00:28:32.509 [2024-11-15 11:46:33.226959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.226991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.227177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.227210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.227491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.227524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.227718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.227750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.227878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.227911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.228106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.509 [2024-11-15 11:46:33.228138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.509 qpair failed and we were unable to recover it. 00:28:32.509 [2024-11-15 11:46:33.228272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.228304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.228559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.228576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.228863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.228896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.228997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.229029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 
00:28:32.510 [2024-11-15 11:46:33.229342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.229375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.229514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.229532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.229687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.229703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.229922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.229938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.230155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.230167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.230242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.230251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.230387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.230398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.230531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.230542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.230698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.230730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.230879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.230911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 
00:28:32.510 [2024-11-15 11:46:33.231040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.231071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.231253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.231263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.231400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.231411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.231633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.231651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.231770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.231802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.232014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.232046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.232240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.232274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.232403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.232434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.232726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.232759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.232939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.232972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 
00:28:32.510 [2024-11-15 11:46:33.233253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.233285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.233432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.233476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.233595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.233627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.233753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.233785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.234012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.234045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.234179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.234212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.234391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.234422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.234635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.234668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.234889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.234920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.235194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.235227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 
00:28:32.510 [2024-11-15 11:46:33.235412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.235443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.235584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.235617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.235882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.235914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.236204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.510 [2024-11-15 11:46:33.236237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.510 qpair failed and we were unable to recover it. 00:28:32.510 [2024-11-15 11:46:33.236508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.236525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.236709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.236726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.236829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.236843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.236930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.236945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.237092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.237109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.237282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.237315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 
00:28:32.511 [2024-11-15 11:46:33.237504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.237543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.237824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.237857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.238042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.238074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.238372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.238404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.238549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.238582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.238852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.238884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.239166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.239199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.239338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.239370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.239506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.239539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.239724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.239757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 
00:28:32.511 [2024-11-15 11:46:33.239958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.239991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.240191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.240208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.240377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.240404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.240571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.240584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.240691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.240701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.240771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.240780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.240859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.240868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.241060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.241092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.241295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.241327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 00:28:32.511 [2024-11-15 11:46:33.241614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.511 [2024-11-15 11:46:33.241647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.511 qpair failed and we were unable to recover it. 
00:28:32.511 [2024-11-15 11:46:33.241851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.511 [2024-11-15 11:46:33.241883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:32.511 qpair failed and we were unable to recover it.
[ ... the same three-line sequence ("connect() failed, errno = 111", "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats continuously from 11:46:33.241851 through 11:46:33.287044, cycling among tqpair values 0x7f4f30000b90, 0x1922550, and 0x7f4f3c000b90, every attempt targeting addr=10.0.0.2, port=4420 ... ]
00:28:32.517 [2024-11-15 11:46:33.287183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.287957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.287967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.288057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.288066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.288209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.288220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 
00:28:32.517 [2024-11-15 11:46:33.288398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.288428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.288571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.288603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.288731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.288763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.288947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.288978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.289192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.289226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.289409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.289420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.289588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.289621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.289736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.289765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.289950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.289982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.517 [2024-11-15 11:46:33.290121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.290152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 
00:28:32.517 [2024-11-15 11:46:33.290324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.517 [2024-11-15 11:46:33.290334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.517 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.290512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.290547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.290758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.290789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.291014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.291045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.291220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.291232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.291321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.291331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.291480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.291513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.291720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.291750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.291866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.291897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.292075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.292106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 
00:28:32.518 [2024-11-15 11:46:33.292236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.292247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.292390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.292400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.292692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.292725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.293003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.293034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.293233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.293265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.293451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.293494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.293704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.293737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.293851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.293883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.294003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.294034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.294254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.294286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 
00:28:32.518 [2024-11-15 11:46:33.294467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.294479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.294545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.294555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.294669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.294701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.294917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.294948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.295067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.295098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.295270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.295280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.295444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.295487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.295745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.295776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.295989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.296020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.296216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.296247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 
00:28:32.518 [2024-11-15 11:46:33.296440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.296489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.296573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.296583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.296720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.296731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.296958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.296969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.297066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.297077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.297279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.297310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.297535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.297570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.297712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.297722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.518 [2024-11-15 11:46:33.297797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.518 [2024-11-15 11:46:33.297807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.518 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.297896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.297925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 
00:28:32.519 [2024-11-15 11:46:33.298133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.298164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.298303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.298334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.298539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.298573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.298705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.298737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.298915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.299036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.299068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.299249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.299280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.299393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.299424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.299663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.299696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.299829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.299860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 
00:28:32.519 [2024-11-15 11:46:33.300056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.300088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.300276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.300286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.300447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.300457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.300543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.300552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.300766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.300794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.300878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.300890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.301033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.301044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.301146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.301158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.301369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.301403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.301652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.301685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 
00:28:32.519 [2024-11-15 11:46:33.301816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.301849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.301972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.302003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.302190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.302222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.302370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.302401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.302688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.302700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.302900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.302932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.303209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.303242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.303443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.303488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.303621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.303653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.303839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.303870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 
00:28:32.519 [2024-11-15 11:46:33.304055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.304088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.304224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.304257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.304452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.304496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.304620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.304651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.304779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.304826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.305012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.305044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.519 qpair failed and we were unable to recover it. 00:28:32.519 [2024-11-15 11:46:33.305326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.519 [2024-11-15 11:46:33.305358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.305485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.305512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.305598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.305610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.305704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.305713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 
00:28:32.520 [2024-11-15 11:46:33.305782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.305792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.305944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.305974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.306996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.307016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.307128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.307168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.307450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.307509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.307698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.307731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.307944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.307976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.308113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.308145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.308274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.308305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.308492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.308534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 
00:28:32.520 [2024-11-15 11:46:33.308696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.308707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.308874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.308906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.309040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.309071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.309206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.309236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.309449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.309493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.309617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.309826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.309856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.309974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.310273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.310348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 
00:28:32.520 [2024-11-15 11:46:33.310454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.310539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.310685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.310835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.310866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.311074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.311106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.311247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.311258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.311410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.311422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.311516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.311527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.311704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.311736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.520 qpair failed and we were unable to recover it. 00:28:32.520 [2024-11-15 11:46:33.311874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.520 [2024-11-15 11:46:33.311905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 
00:28:32.521 [2024-11-15 11:46:33.312035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.312067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.312343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.312374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.312635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.312668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.312786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.312818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.312936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.312967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.313188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.313220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.313425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.313456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.313648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.313681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.313823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.313854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.314065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.314098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 
00:28:32.521 [2024-11-15 11:46:33.314226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.314257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.314374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.314405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.314543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.314576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.314711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.314742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.314994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.315031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.315163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.315198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.315384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.315395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.315498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.315530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.315785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.315819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.316010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.316041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 
00:28:32.521 [2024-11-15 11:46:33.316227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.316259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.316438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.316548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.316559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.316765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.316776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.316934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.316945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.317079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.317090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.317358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.317389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.317505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.317539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.317739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.317772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.317909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.317941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 
00:28:32.521 [2024-11-15 11:46:33.318220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.318252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.318447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.318487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.318674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.318707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.318969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.318980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.319115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.319126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.319263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.319274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.319352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.319362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.521 qpair failed and we were unable to recover it. 00:28:32.521 [2024-11-15 11:46:33.319445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.521 [2024-11-15 11:46:33.319455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.319621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.319632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.319841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.319851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 
00:28:32.522 [2024-11-15 11:46:33.320008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.320171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.320244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.320369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.320591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.320779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.320953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.320984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.321224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.321256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.321400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.321433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.321724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.321735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 
00:28:32.522 [2024-11-15 11:46:33.321937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.321969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.322163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.322196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.322388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.322420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.322528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.322538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.322621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.322633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.322723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.322733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.322812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.322822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.323015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.323047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.323322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.323354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.323516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.323556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 
00:28:32.522 [2024-11-15 11:46:33.323698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.323709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.323799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.323809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.323926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.323956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.324140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.324173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.324470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.324504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.324763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.324795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.324924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.324956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.325160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.325192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.325381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.325413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.325549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.325581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 
00:28:32.522 [2024-11-15 11:46:33.325764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.325775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.325939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.325971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.326238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.326269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.326468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.326501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.326610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.326620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.522 qpair failed and we were unable to recover it. 00:28:32.522 [2024-11-15 11:46:33.326700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.522 [2024-11-15 11:46:33.326710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.328052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.328071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.328238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.328247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.328380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.328390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.328536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.328567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 
00:28:32.523 [2024-11-15 11:46:33.328705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.328739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.329050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.329123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.329289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.329324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.329511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.329523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.329610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.329620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.329731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.329763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.330018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.330050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.330308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.330340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.330476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.330487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.330568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.330577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 
00:28:32.523 [2024-11-15 11:46:33.330788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.330820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.331076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.331108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.331294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.331326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.331504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.331516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.331709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.331750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.331878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.331910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.332139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.332171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.332367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.332378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.332554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.332565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.332640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.332650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 
00:28:32.523 [2024-11-15 11:46:33.332876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.332907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.333041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.333073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.333259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.333290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.333568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.333602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.333781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.333813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.334096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.334134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.334224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.334437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.334492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.334763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.334796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.334991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.335023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 
00:28:32.523 [2024-11-15 11:46:33.335151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.523 [2024-11-15 11:46:33.335183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.523 qpair failed and we were unable to recover it. 00:28:32.523 [2024-11-15 11:46:33.335451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.335496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.335639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.335664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.335816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.335827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.335972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.336004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.336202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.336233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.336418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.336451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.336690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.336701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.336856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.336866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.337001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.337012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 
00:28:32.524 [2024-11-15 11:46:33.337159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.337170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.337256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.337285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.337403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.337435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.337594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.337631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.337869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.337900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.338138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.338171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.338365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.338376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.338560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.338593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.338724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.338757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.338893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.338924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 
00:28:32.524 [2024-11-15 11:46:33.339109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.339141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.339327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.339359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.339561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.339594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.339742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.339775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.339925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.339962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.340158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.340199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.340391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.340402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.340547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.340558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.340639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.340648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.340733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.340742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 
00:28:32.524 [2024-11-15 11:46:33.340891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.340923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.341054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.341086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.341219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.341249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.341476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.341508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.341711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.341722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.341819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.341828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.342011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.342041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.524 qpair failed and we were unable to recover it. 00:28:32.524 [2024-11-15 11:46:33.342234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.524 [2024-11-15 11:46:33.342265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.342405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.342438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.342564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.342597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 
00:28:32.525 [2024-11-15 11:46:33.342850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.342860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.343039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.343049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.343131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.343140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.343250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.343280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.343483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.343516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.343716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.343747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.343977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.344244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.344398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.344553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 
00:28:32.525 [2024-11-15 11:46:33.344706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.344812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.344955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.344966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.345108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.345119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.345272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.345283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.345429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.345440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.346103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.346128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.346357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.346369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.346455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.346468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 00:28:32.525 [2024-11-15 11:46:33.346636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.525 [2024-11-15 11:46:33.346647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.525 qpair failed and we were unable to recover it. 
00:28:32.525 [2024-11-15 11:46:33.347435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.525 [2024-11-15 11:46:33.347456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:32.525 qpair failed and we were unable to recover it.
00:28:32.525 [... the same three-line pattern (posix.c:1054 "connect() failed, errno = 111"; nvme_tcp.c:2288 "sock connection error of tqpair=... with addr=10.0.0.2, port=4420"; "qpair failed and we were unable to recover it.") repeats continuously from 11:46:33.347435 through 11:46:33.394966 (elapsed 00:28:32.525-00:28:32.813), first for tqpair=0x7f4f30000b90 and, from about 11:46:33.353943 onward, for tqpair=0x7f4f34000b90 ...]
00:28:32.813 [2024-11-15 11:46:33.395162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.395195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.395332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.395364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.395546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.395557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.395645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.395655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.395739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.395748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.395905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.395936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.396089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.396122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.396325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.396356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.813 [2024-11-15 11:46:33.396556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.813 [2024-11-15 11:46:33.396589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.813 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.396808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.396840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 
00:28:32.814 [2024-11-15 11:46:33.397035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.397067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.397249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.397281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.397399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.397431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.397670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.397741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.398033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.398070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.398200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.398234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.398366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.398400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.398674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.398708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.398899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.398933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.399117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.399150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 
00:28:32.814 [2024-11-15 11:46:33.399375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.399408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.399641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.399666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.399789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.399822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.399966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.399999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.400135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.400169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.400372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.400405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.400608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.400625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.400726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.400757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.400890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.400923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.401059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.401092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 
00:28:32.814 [2024-11-15 11:46:33.401400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.401434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.401578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.401611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.401842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.401875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.402056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.402088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.402276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.402309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.402509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.402528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.402698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.402732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.402925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.402958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.403096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.403130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.403248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.403281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 
00:28:32.814 [2024-11-15 11:46:33.403488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.403522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.403729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.403763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.403944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.403960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.404128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.404145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.404239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.404254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.404443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.404484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.814 [2024-11-15 11:46:33.404628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.814 [2024-11-15 11:46:33.404661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.814 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.404866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.404899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.405174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.405191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.405285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.405300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 
00:28:32.815 [2024-11-15 11:46:33.405489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.405522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.405659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.405693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.405948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.405981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.406106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.406140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.406362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.406396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.406607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.406625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.406721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.406736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.406883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.406915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.407130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.407164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.407376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.407409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 
00:28:32.815 [2024-11-15 11:46:33.407610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.407627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.407782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.407798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.407979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.408048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.408202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.408238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.408438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.408479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.408615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.408647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.408787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.408818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.408991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.409002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.409213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.409245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.409389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.409433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 
00:28:32.815 [2024-11-15 11:46:33.409585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.409597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.409758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.409790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.409929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.409961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.410072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.410103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.410319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.410351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.410480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.410524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.410720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.410752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.410860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.410871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.411038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.411049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.411212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.411223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 
00:28:32.815 [2024-11-15 11:46:33.411393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.411424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.411556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.411590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.411793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.411825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.412029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.412040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.815 qpair failed and we were unable to recover it. 00:28:32.815 [2024-11-15 11:46:33.412300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.815 [2024-11-15 11:46:33.412332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.412475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.412508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.412699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.412731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.412984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.413016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.413185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.413218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.413427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.413471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 
00:28:32.816 [2024-11-15 11:46:33.413666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.413677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.413776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.413785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.413925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.413936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.414929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.414962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 
00:28:32.816 [2024-11-15 11:46:33.415161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.415193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.415478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.415512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.415711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.415723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.415823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.415856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.416051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.416084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.416276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.416308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.416518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.416529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.416682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.416715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.416842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.416874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.417073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.417105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 
00:28:32.816 [2024-11-15 11:46:33.417249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.417280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.417470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.417503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.417694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.417726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.417931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.417962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.418091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.418122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.418320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.418358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.418618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.418651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.418849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.418860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.419008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.419040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.419156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.419188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 
00:28:32.816 [2024-11-15 11:46:33.419388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.419420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.419615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.419626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.816 [2024-11-15 11:46:33.419857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.816 [2024-11-15 11:46:33.419890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.816 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.420089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.420121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.420255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.420287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.420408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.420442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.420597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.420629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.420747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.420779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.420965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.420997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 00:28:32.817 [2024-11-15 11:46:33.421199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.817 [2024-11-15 11:46:33.421232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.817 qpair failed and we were unable to recover it. 
00:28:32.817 [2024-11-15 11:46:33.421427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.817 [2024-11-15 11:46:33.421469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:32.817 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0x7f4f30000b90 through 2024-11-15 11:46:33.423677 ...]
00:28:32.817 [2024-11-15 11:46:33.424589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.817 [2024-11-15 11:46:33.424615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:32.817 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7f4f34000b90, with every connect() to 10.0.0.2:4420 refused (errno = 111), up to the final attempt shown below ...]
00:28:32.822 [2024-11-15 11:46:33.465252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.822 [2024-11-15 11:46:33.465262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:32.822 qpair failed and we were unable to recover it.
00:28:32.822 [2024-11-15 11:46:33.465411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-11-15 11:46:33.465423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.822 qpair failed and we were unable to recover it. 00:28:32.822 [2024-11-15 11:46:33.465501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-11-15 11:46:33.465512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.822 qpair failed and we were unable to recover it. 00:28:32.822 [2024-11-15 11:46:33.465761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-11-15 11:46:33.465803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.822 qpair failed and we were unable to recover it. 00:28:32.822 [2024-11-15 11:46:33.465943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.465976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.466096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.466130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.466246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.466280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.466418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.466451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.466672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.466706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.467007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.467040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.467184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.467216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 
00:28:32.823 [2024-11-15 11:46:33.467388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.467420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.468352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.468373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.468536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.468550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.468704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.468726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.468816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.468917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.468948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.469082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.469116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.469257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.469289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.469510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.469543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.469768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.469779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 
00:28:32.823 [2024-11-15 11:46:33.469939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.469950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.470038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.470049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.470198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.470209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.470304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.470315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.470406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.470416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.470594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.470607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.471892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.471912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.472104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.472115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.472273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.472284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.473321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.473341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 
00:28:32.823 [2024-11-15 11:46:33.473514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.473527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.473623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.473654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.473780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.473812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.474010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.474043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.474296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.474328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.474522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.474557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.474787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.474798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.474954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.474965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.475028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.475039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.475173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.475218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 
00:28:32.823 [2024-11-15 11:46:33.475336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.823 [2024-11-15 11:46:33.475368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.823 qpair failed and we were unable to recover it. 00:28:32.823 [2024-11-15 11:46:33.475488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.475521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.476323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.476347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.476432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.476443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.476521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.476532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.476607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.476618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.476755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.476786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.476977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.477020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.477232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.477277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.477520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.477571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 
00:28:32.824 [2024-11-15 11:46:33.477791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.477809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.477893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.477905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.478936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.478947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 
00:28:32.824 [2024-11-15 11:46:33.479088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.479100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.479262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.479274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.479423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.479435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.479583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.479595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.479682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.479692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.479880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.479914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.480063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.480096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.480278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.480312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.480513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.480549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.480683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.480719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 
00:28:32.824 [2024-11-15 11:46:33.480844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.480856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.480944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.480955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.481109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.481121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.481288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.481322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.481475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.481510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.481764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.481797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.482046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.482058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.482156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.482166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.482256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.482267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.824 qpair failed and we were unable to recover it. 00:28:32.824 [2024-11-15 11:46:33.482403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.824 [2024-11-15 11:46:33.482450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 
00:28:32.825 [2024-11-15 11:46:33.482735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.482772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.482916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.482951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.483139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.483155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.483262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.483295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.484191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.484211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.484402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.484437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.484582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.484624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.484830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.484864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.485137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.485149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.485302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.485313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 
00:28:32.825 [2024-11-15 11:46:33.485403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.485413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.485569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.485581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.485733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.485744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.485819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.485829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.486096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.486130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.486276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.486310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.486442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.486491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.486679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.486713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.486837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.486871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.487097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.487130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 
00:28:32.825 [2024-11-15 11:46:33.487333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.487367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.487589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.487624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.487816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.487850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.488077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.488112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.488251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.488284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.488601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.488636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.488766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.488800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.488974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.488985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.489153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.489164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.489303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.489314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 
00:28:32.825 [2024-11-15 11:46:33.489403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.489414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.489573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.489586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.489771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.489803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.490563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.490627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.490777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.490793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.490986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.491002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.491113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.825 [2024-11-15 11:46:33.491125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.825 qpair failed and we were unable to recover it. 00:28:32.825 [2024-11-15 11:46:33.491203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.491213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.491423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.491434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.491647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.491659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 
00:28:32.826 [2024-11-15 11:46:33.491801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.491813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.491900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.491910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.492000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.492017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.492169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.492190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.492335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.492368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.492668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.492704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.492841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.492873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.493066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.493076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.494217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.494237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.494536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.494572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 
00:28:32.826 [2024-11-15 11:46:33.494832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.494869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.495172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.495184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.495418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.495429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.495528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.495538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.495698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.495710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.495906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.495938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.496090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.496123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.496326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.496360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.496563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.496599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.496825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.496858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 
00:28:32.826 [2024-11-15 11:46:33.496972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.497004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.497125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.497158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.497424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.497456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.497664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.497696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.497833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.497868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.498202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.498213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.498364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.498376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.498513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.498526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.498763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.498798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.499034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.499101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 
00:28:32.826 [2024-11-15 11:46:33.499257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.826 [2024-11-15 11:46:33.499294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.826 qpair failed and we were unable to recover it. 00:28:32.826 [2024-11-15 11:46:33.499520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.499554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.499754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.499788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.499979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.500103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.500263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.500349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.500432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.500599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.500719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 
00:28:32.827 [2024-11-15 11:46:33.500914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.500945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.501074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.501107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.501305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.501345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.501576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.501609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.501738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.501771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.501975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.501986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.502192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.502203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.502386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.502418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.502560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.502593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.502723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.502756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 
00:28:32.827 [2024-11-15 11:46:33.503010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.503043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.503181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.503214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.503400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.503432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.503636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.503668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.503813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.503844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.504040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.504051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.504136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.504146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.504345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.504377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.504514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.504547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.504732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.504764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 
00:28:32.827 [2024-11-15 11:46:33.505018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.505050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.505238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.505270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.505483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.505517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.505799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.505810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.505973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.505984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.506129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.506140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.506285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.506296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.506393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.506404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.506513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.827 [2024-11-15 11:46:33.506523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.827 qpair failed and we were unable to recover it. 00:28:32.827 [2024-11-15 11:46:33.506620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.506641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 
00:28:32.828 [2024-11-15 11:46:33.506850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.506862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.507944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.507956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.508140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.508173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.509199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.509220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 
00:28:32.828 [2024-11-15 11:46:33.509329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.509448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.509498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.509694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.509736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.509972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.510217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.510336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.510423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.510604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.510748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.510910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.510943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 
00:28:32.828 [2024-11-15 11:46:33.511138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.511170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.511373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.511404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.511603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.511637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.511793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.511825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.511976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.512009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.512164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.512176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.512963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.512984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.513270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.513306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.513545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.513579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.513757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.513769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 
00:28:32.828 [2024-11-15 11:46:33.513846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.513857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.514070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.514081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.514230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.514242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.514319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.514330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.514518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.514538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.514628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.514638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.516050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.828 [2024-11-15 11:46:33.516097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.828 qpair failed and we were unable to recover it. 00:28:32.828 [2024-11-15 11:46:33.516383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.516418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.516551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.516585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.516847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.516880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 
00:28:32.829 [2024-11-15 11:46:33.517009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.517105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.517270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.517520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.517704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.517811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.517968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.517977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.518075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.518322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.518412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 
00:28:32.829 [2024-11-15 11:46:33.518588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.518676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.518828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.518924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.518934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.519072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.519146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.519239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.519382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.519487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.519655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 
00:28:32.829 [2024-11-15 11:46:33.519838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.519870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.520077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.520109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.520396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.520428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.520579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.520612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.521562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.521583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.521833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.521866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.522064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.522096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.522313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.522345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.522523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.522560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.522751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.522784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 
00:28:32.829 [2024-11-15 11:46:33.523069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.523081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.523228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.523240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.523418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.523430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.523581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.523593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.829 [2024-11-15 11:46:33.523821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.829 [2024-11-15 11:46:33.523854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.829 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.523998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.524029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.524158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.524191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.524393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.524426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.524550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.524583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.524810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.524889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 
00:28:32.830 [2024-11-15 11:46:33.525082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.525108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.525205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.525233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.525397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.525434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.525680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.525715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.525944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.525976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.526092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.526124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.526386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.526418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.526629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.526664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.526872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.526905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.527087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.527120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 
00:28:32.830 [2024-11-15 11:46:33.527260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.527293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.527421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.527453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.527665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.527707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.527837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.527871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.528067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.528102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.528299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.528331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.528535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.528569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.528759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.528791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.528939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.528971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.529169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.529201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 
00:28:32.830 [2024-11-15 11:46:33.529332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.529364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.529494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.529527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.529745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.529777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.529937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.529949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.530158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.530190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.530327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.530359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.530566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.530599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.530786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.530818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.830 qpair failed and we were unable to recover it. 00:28:32.830 [2024-11-15 11:46:33.531079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.830 [2024-11-15 11:46:33.531090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.531277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.531309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 
00:28:32.831 [2024-11-15 11:46:33.531426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.531469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.531660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.531691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.531895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.531929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.532138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.532170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.532372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.532404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.532621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.532654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.532776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.532788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.532925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.532936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.533093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.533103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.533280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.533322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 
00:28:32.831 [2024-11-15 11:46:33.533530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.533565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.533753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.533786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.533982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.533999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.534110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.534125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.534323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.534356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.534562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.534596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.534752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.534785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.534918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.534951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.535067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.535099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.535394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.535427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 
00:28:32.831 [2024-11-15 11:46:33.535580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.535614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.535732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.535745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.535861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.535871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.536027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.536038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.536131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.536141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.536274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.536306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.536496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.536529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.536658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.536690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.536896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.536927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.537135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.537168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 
00:28:32.831 [2024-11-15 11:46:33.537281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.537313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.537499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.537532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.537647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.537680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.537833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.537844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.538018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.538051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.831 [2024-11-15 11:46:33.539269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.831 [2024-11-15 11:46:33.539290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.831 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.539449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.539475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.539705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.539739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.539962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.539994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.540200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 
00:28:32.832 [2024-11-15 11:46:33.540311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.540416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.540531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.540622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.540725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.540889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.540901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.541054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.541157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.541393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.541556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 
00:28:32.832 [2024-11-15 11:46:33.541719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.541814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.541964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.541976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.542850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.542859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 
00:28:32.832 [2024-11-15 11:46:33.543009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.543954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.543965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.544135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.832 [2024-11-15 11:46:33.544147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.832 qpair failed and we were unable to recover it. 00:28:32.832 [2024-11-15 11:46:33.544222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 
00:28:32.833 [2024-11-15 11:46:33.544308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.544401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.544569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.544666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.544753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.544836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.544847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.545014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.545043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.545208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.545221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.545320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.545330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.545499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.545511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 
00:28:32.833 [2024-11-15 11:46:33.545649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.545660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.545911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.545943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.546077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.546108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.546367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.546399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.546679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.546713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.546939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.546951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 
00:28:32.833 [2024-11-15 11:46:33.547406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.547978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.547988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.548081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.548091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.548228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.548239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.548393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.548426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.548580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.548616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.549570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.549591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 
00:28:32.833 [2024-11-15 11:46:33.549846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.549882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.551161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.551181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.551368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.551378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.551635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.551672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.551828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.551860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.552072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.552084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.552229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.833 [2024-11-15 11:46:33.552240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.833 qpair failed and we were unable to recover it. 00:28:32.833 [2024-11-15 11:46:33.552396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.552406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.552545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.552556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.552689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.552722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 
00:28:32.834 [2024-11-15 11:46:33.553004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.553036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.553519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.553563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.553700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.553735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.554065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.554096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.554554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.554571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.554814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.554826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.554983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.554995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.555703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.555725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.555889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.555901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.556042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.556054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 
00:28:32.834 [2024-11-15 11:46:33.556220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.556230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.556417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.556449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.556720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.556752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.556948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.556980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.557119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.557152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.557345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.557380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.557632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.557666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.557920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.557953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.558135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.558156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.558283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.558296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 
00:28:32.834 [2024-11-15 11:46:33.558438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.558449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.558693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.558706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.558829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.558839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.558994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.559027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.559166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.559197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.559327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.559359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.559502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.559535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.559668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.559701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.559945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.559977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.560178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.560211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 
00:28:32.834 [2024-11-15 11:46:33.560407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.560440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.560685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.560717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.560829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.560840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.560928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.560938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.561114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.834 [2024-11-15 11:46:33.561147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.834 qpair failed and we were unable to recover it. 00:28:32.834 [2024-11-15 11:46:33.561340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.561372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.561520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.561555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.561682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.561715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.561908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.561941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.562062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.562095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 
00:28:32.835 [2024-11-15 11:46:33.562211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.562221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.562364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.562374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.562595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.562629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.562889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.562921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.563050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.563072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.563227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.563258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.563426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.563504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.563669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.563706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.563966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.563999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.564126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.564159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 
00:28:32.835 [2024-11-15 11:46:33.564287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.564322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.564472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.564505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.564716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.564748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.565861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.565891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.566128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.566147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.566299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.566316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.566411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.566430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.566555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.566579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.566743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.566755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.566906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.566917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 
00:28:32.835 [2024-11-15 11:46:33.567002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.567012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.567104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.567113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.567291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.567302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.567451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.567466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.567622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.835 [2024-11-15 11:46:33.567633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.835 qpair failed and we were unable to recover it. 00:28:32.835 [2024-11-15 11:46:33.568338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.568361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.568545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.568558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.568646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.568657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.568761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.568771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.568866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.568877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 
00:28:32.836 [2024-11-15 11:46:33.569022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.569949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.569960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 
00:28:32.836 [2024-11-15 11:46:33.570310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.570934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.570946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 
00:28:32.836 [2024-11-15 11:46:33.571493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.571911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.571921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.572064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.572074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.572255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.572266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.572340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.572350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.572432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.572442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 00:28:32.836 [2024-11-15 11:46:33.572656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.836 [2024-11-15 11:46:33.572668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.836 qpair failed and we were unable to recover it. 
00:28:32.836 [2024-11-15 11:46:33.572818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.572828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.572891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.572901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.573956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.573966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 
00:28:32.837 [2024-11-15 11:46:33.574033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.574980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.574991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 
00:28:32.837 [2024-11-15 11:46:33.575055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.575065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.575158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.575168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.575247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.575257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.575337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.575348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.575492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.575502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.576352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.576375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.576570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.576585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.576818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.576852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.577452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.577508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.577702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.577734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 
00:28:32.837 [2024-11-15 11:46:33.577955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.577987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.578920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.578930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 00:28:32.837 [2024-11-15 11:46:33.579014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.579025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.837 qpair failed and we were unable to recover it. 
00:28:32.837 [2024-11-15 11:46:33.579121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.837 [2024-11-15 11:46:33.579130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.579231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.579331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.579425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.579613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.579687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.579836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.579982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.580014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.580274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.580306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.580425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.580457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 
00:28:32.838 [2024-11-15 11:46:33.580596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.580629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.580811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.580844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.580932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.580942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.581083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.581095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.581208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.581219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.581370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.581381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.581614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.581647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.582680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.582701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.582882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.582918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.583055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.583089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 
00:28:32.838 [2024-11-15 11:46:33.583275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.583307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.583592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.583626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.583746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.583780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.583910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.583942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 
00:28:32.838 [2024-11-15 11:46:33.584872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.584957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.584967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.585050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.585061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.585845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.585864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.585966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.585977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.586054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.586065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.586208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.586239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.586445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.586494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.586618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.586651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.586764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.586797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 
00:28:32.838 [2024-11-15 11:46:33.587065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.587076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.838 [2024-11-15 11:46:33.587225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.838 [2024-11-15 11:46:33.587236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.838 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.587430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.587441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.587654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.587665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.587738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.587748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.587816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.587826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.587977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.587987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 
00:28:32.839 [2024-11-15 11:46:33.588441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.588949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.588960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.589136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.589224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.589379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.589631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.589790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 
00:28:32.839 [2024-11-15 11:46:33.589875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.589980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.589990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.590925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.590934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 
00:28:32.839 [2024-11-15 11:46:33.591183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.591828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.591837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.839 qpair failed and we were unable to recover it. 00:28:32.839 [2024-11-15 11:46:33.592063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.839 [2024-11-15 11:46:33.592072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.592211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.592220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.592290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.592299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 
00:28:32.840 [2024-11-15 11:46:33.592462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.592471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.592637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.592646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.592758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.592767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.592934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.592943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 
00:28:32.840 [2024-11-15 11:46:33.593710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.593860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.593997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.594929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.594938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 
00:28:32.840 [2024-11-15 11:46:33.595077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.595087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.595220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.595230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.595479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.595491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.595593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.595604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.595677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.595686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.595919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.595928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.596064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.596226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.596327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 
00:28:32.840 [2024-11-15 11:46:33.596577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.596737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.596892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.596902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.597047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.597061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.597133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.597144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.597284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.597296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.840 [2024-11-15 11:46:33.597441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.840 [2024-11-15 11:46:33.597452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.840 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.597600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.597612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.597714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.597725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.597863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.597874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 
00:28:32.841 [2024-11-15 11:46:33.598049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.598198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.598375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.598479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.598645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.598797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.598940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.598949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.599102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.599319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.599399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 
00:28:32.841 [2024-11-15 11:46:33.599557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.599648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.599814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.599908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.599921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 
00:28:32.841 [2024-11-15 11:46:33.600849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.600946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.600956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.601839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.601850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 
00:28:32.841 [2024-11-15 11:46:33.602004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.602906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.602995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.603006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.603085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.603096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 
00:28:32.841 [2024-11-15 11:46:33.603239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.841 [2024-11-15 11:46:33.603251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.841 qpair failed and we were unable to recover it. 00:28:32.841 [2024-11-15 11:46:33.603319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.603330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.603423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.603432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.603594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.603605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.603745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.603756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.603982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.603993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 
00:28:32.842 [2024-11-15 11:46:33.604533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.604959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.604970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 
00:28:32.842 [2024-11-15 11:46:33.605626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.605879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.605890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.606792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 
00:28:32.842 [2024-11-15 11:46:33.606975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.606985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.607877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.607887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.608049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.608060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 
00:28:32.842 [2024-11-15 11:46:33.608136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.608147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.608226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.608236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.608371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.608382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.608534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.608546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.842 [2024-11-15 11:46:33.608698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.842 [2024-11-15 11:46:33.608710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.842 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.608917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.608929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 
00:28:32.843 [2024-11-15 11:46:33.609532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.609946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.609960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.610102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.610188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.610406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.610579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.610685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.610799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 
00:28:32.843 [2024-11-15 11:46:33.610956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.610968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.611131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.611142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.611230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.611243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.611410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.611422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.611563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.611575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.611784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.611796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.611867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.611879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 
00:28:32.843 [2024-11-15 11:46:33.612390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.612953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.612964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 
00:28:32.843 [2024-11-15 11:46:33.613613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.613939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.613950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.614159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.614241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.614390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.614494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.614660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.614766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 
00:28:32.843 [2024-11-15 11:46:33.614911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.614922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.615018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.843 [2024-11-15 11:46:33.615030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.843 qpair failed and we were unable to recover it. 00:28:32.843 [2024-11-15 11:46:33.615181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.615193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.615343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.615358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.615547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.615572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.615746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.615764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.615913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.615930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 
00:28:32.844 [2024-11-15 11:46:33.616414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.616907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.616996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.617074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.617159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.617323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.617420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.617584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 
00:28:32.844 [2024-11-15 11:46:33.617828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.617924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.617936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.618951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.618962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 
00:28:32.844 [2024-11-15 11:46:33.619027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.619909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.619921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.620084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.620236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 
00:28:32.844 [2024-11-15 11:46:33.620334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.620431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.620549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.620702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.620875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.844 [2024-11-15 11:46:33.620887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.844 qpair failed and we were unable to recover it. 00:28:32.844 [2024-11-15 11:46:33.621026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.621108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.621198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.621282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.621443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 
00:28:32.845 [2024-11-15 11:46:33.621556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.621707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.621816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.621827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 
00:28:32.845 [2024-11-15 11:46:33.622886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.622966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.622977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.623973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.623983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 
00:28:32.845 [2024-11-15 11:46:33.624154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.624166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.624250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.624261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.624439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.624451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.624624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.624636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.624893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.624904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.624971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.624982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.625156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.625169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.625345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.625357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.625519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.625532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.625706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.625718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 
00:28:32.845 [2024-11-15 11:46:33.625901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.625913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.626933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.626945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.627043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.627055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 
00:28:32.845 [2024-11-15 11:46:33.627191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.627204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.627362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.845 [2024-11-15 11:46:33.627374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.845 qpair failed and we were unable to recover it. 00:28:32.845 [2024-11-15 11:46:33.627470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.627484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.627568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.627580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.627647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.627658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.627815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.627827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.628038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.628050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.628146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.628158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.628394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.628406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.628645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.628657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 
00:28:32.846 [2024-11-15 11:46:33.628819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.628831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.628978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.628990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.629078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.629090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.629237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.629249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.629403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.629415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.629666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.629679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.629775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.629788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.629886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.629900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 
00:28:32.846 [2024-11-15 11:46:33.630243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.630980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.630994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.631141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.631154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.631334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.631347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.631417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.631429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.631524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.631536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 
00:28:32.846 [2024-11-15 11:46:33.631670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.631683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.631838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.631850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.632922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.632934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.633141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.633154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 
00:28:32.846 [2024-11-15 11:46:33.633237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.633248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.633345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.633357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.633529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.633542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.633698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.633710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.633869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.633882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.634028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.634042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.634249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.634262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.634553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.634565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.634704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.634717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 00:28:32.846 [2024-11-15 11:46:33.634999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.846 [2024-11-15 11:46:33.635012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.846 qpair failed and we were unable to recover it. 
00:28:32.847 [2024-11-15 11:46:33.635153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.847 [2024-11-15 11:46:33.635165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.847 qpair failed and we were unable to recover it. 00:28:32.847 [2024-11-15 11:46:33.635259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.847 [2024-11-15 11:46:33.635272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.847 qpair failed and we were unable to recover it. 00:28:32.847 [2024-11-15 11:46:33.635438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.847 [2024-11-15 11:46:33.635450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.847 qpair failed and we were unable to recover it. 00:28:32.847 [2024-11-15 11:46:33.635607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.847 [2024-11-15 11:46:33.635619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:32.847 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.635784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.635798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 
00:28:33.134 [2024-11-15 11:46:33.636647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.636833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.636989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.637180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.637287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.637378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.637549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.637701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.637954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.637966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-11-15 11:46:33.638108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.638121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 
00:28:33.134 [2024-11-15 11:46:33.638207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.134 [2024-11-15 11:46:33.638219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.638291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.638302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.638377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.638389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.638531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.638545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.638758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.638769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.638875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.638887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.638979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.638991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 
00:28:33.135 [2024-11-15 11:46:33.639424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.639962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.639974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 
00:28:33.135 [2024-11-15 11:46:33.640598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.640967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.640979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.641151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.641163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.641251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.641263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.641421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.641432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.641586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.641598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.641711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.641722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 
00:28:33.135 [2024-11-15 11:46:33.641871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.641884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.642029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.642041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.642147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.642161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.642299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.642312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.642406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.642419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.135 [2024-11-15 11:46:33.642499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.135 [2024-11-15 11:46:33.642512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.135 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.642601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.642613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.642794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.642808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.642905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.642917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.643133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 
00:28:33.136 [2024-11-15 11:46:33.643301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.643468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.643550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.643637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.643798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.643910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.643923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.644061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.644227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.644389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.644555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 
00:28:33.136 [2024-11-15 11:46:33.644664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.644820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.644906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.644918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.645127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.645206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.645429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.645685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.645800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.645912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.645994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.646007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 
00:28:33.136 [2024-11-15 11:46:33.646145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.646158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.646229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.646241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.646448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.646466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.646681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.646693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.646861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.646874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.647107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.647120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.647334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.647347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.647492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.647505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.647674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.647686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.647770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.647783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 
00:28:33.136 [2024-11-15 11:46:33.647875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.136 [2024-11-15 11:46:33.647887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.136 qpair failed and we were unable to recover it. 00:28:33.136 [2024-11-15 11:46:33.648114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.648273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.648369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.648525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.648683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.648796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.648899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.648912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.649092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.649193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 
00:28:33.137 [2024-11-15 11:46:33.649292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.649385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.649615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.649713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.649867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.649879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 
00:28:33.137 [2024-11-15 11:46:33.650623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.650825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.650837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.651862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 
00:28:33.137 [2024-11-15 11:46:33.651979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.651991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.652161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.652174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.652276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.652288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.652396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.652408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.652630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.652643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.652793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.137 [2024-11-15 11:46:33.652806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.137 qpair failed and we were unable to recover it. 00:28:33.137 [2024-11-15 11:46:33.653071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.653162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.653314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.653396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 
00:28:33.138 [2024-11-15 11:46:33.653564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.653786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.653883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.653896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.653996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.654194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.654372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.654465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.654565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.654739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.654857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 
00:28:33.138 [2024-11-15 11:46:33.654964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.654977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.655055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.655068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.655168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.655181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.655354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.655367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.655532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.655546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.655788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.655800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.655983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.655996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.656103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.656115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.656350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.656364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.656536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.656550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 
00:28:33.138 [2024-11-15 11:46:33.656718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.656730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.656955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.656968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.657193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.657206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.657359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.657371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.657462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.657476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.657586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.657602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.657684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.657697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.657849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.657862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.658017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.658030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.658243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.658257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 
00:28:33.138 [2024-11-15 11:46:33.658351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.658365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.138 qpair failed and we were unable to recover it. 00:28:33.138 [2024-11-15 11:46:33.658447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.138 [2024-11-15 11:46:33.658463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.658540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.658553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.658637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.658649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.658792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.658805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.658968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.658981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.659060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.659073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.659139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.659152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.659327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.659341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.659581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.659594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 
00:28:33.139 [2024-11-15 11:46:33.659748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.659764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.659977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.659991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.660149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.660324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.660443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.660631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.660744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.660908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.660992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 
00:28:33.139 [2024-11-15 11:46:33.661183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.661915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.661993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.662004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.662157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.662169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.662376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.662389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.662533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.662547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 
00:28:33.139 [2024-11-15 11:46:33.662619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.662631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.139 [2024-11-15 11:46:33.662722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.139 [2024-11-15 11:46:33.662734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.139 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.662831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.662843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.662983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.662995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 
00:28:33.140 [2024-11-15 11:46:33.663848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.663942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.663953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.664967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.664979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.665164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.665178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 
00:28:33.140 [2024-11-15 11:46:33.665355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.665371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.665531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.665544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.665641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.665653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.665734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.665746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.665963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.665976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 
00:28:33.140 [2024-11-15 11:46:33.666700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.666947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.666959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.667029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.667041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.667130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.667143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.667233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.667245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.667386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.667397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.667493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.667506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.140 [2024-11-15 11:46:33.667670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.140 [2024-11-15 11:46:33.667687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.140 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.667758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.667769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 
00:28:33.141 [2024-11-15 11:46:33.667962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.667974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.668903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.668994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 
00:28:33.141 [2024-11-15 11:46:33.669202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.669921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.669998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.670105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.670209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.670363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 
00:28:33.141 [2024-11-15 11:46:33.670443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.670557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.670727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.670929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.670942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.671102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.671190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.671349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.671603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.671687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.671790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 
00:28:33.141 [2024-11-15 11:46:33.671941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.671954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.672045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.672057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.672195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.672208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.672301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.141 [2024-11-15 11:46:33.672313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.141 qpair failed and we were unable to recover it. 00:28:33.141 [2024-11-15 11:46:33.672467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.672481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.672618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.672632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.672810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.672823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.672964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.672977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.673193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.673205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.673379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.673392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 
00:28:33.142 [2024-11-15 11:46:33.673607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.673620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.673692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.673704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.673858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.673872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.674942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.674954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 
00:28:33.142 [2024-11-15 11:46:33.675038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.675049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.675201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.675216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.675461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.675475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.675671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.675684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.675838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.675851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.676010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.676192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.676380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.676541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.676711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 
00:28:33.142 [2024-11-15 11:46:33.676808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.676907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.676919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.677904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.677917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 00:28:33.142 [2024-11-15 11:46:33.678057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.142 [2024-11-15 11:46:33.678069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.142 qpair failed and we were unable to recover it. 
00:28:33.142 [2024-11-15 11:46:33.678227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.678240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.678407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.678421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.678502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.678515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.678596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.678609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.678684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.678695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.678798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.678809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.679024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.679120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.679367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.679490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 
00:28:33.143 [2024-11-15 11:46:33.679660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.679780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.679961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.679974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.680123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.680137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.680224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.680236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.680467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.680480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.680657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.680670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.680937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.680951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.681103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.681117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.681279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.681292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 
00:28:33.143 [2024-11-15 11:46:33.681443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.681455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.681619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.681632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.681775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.681789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.681891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.681902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.682055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.682241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.682349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.682463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.682580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.682748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 
00:28:33.143 [2024-11-15 11:46:33.682905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.682922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.683085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.683098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.683258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.683271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.683496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.683511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.683673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.143 [2024-11-15 11:46:33.683686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.143 qpair failed and we were unable to recover it. 00:28:33.143 [2024-11-15 11:46:33.683842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.683855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.684021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.684034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.684177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.684191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.684338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.684354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.684424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.684436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 
00:28:33.144 [2024-11-15 11:46:33.684553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.684566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.684768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.684781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.685825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.685838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.686075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.686090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 
00:28:33.144 [2024-11-15 11:46:33.686166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.686178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.686328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.686341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.686491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.686504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.686716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.686730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.686922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.686935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.687112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.687126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.687267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.687279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.687362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.687374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.687627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.687642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.687877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.687891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 
00:28:33.144 [2024-11-15 11:46:33.688033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.688046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.688146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.688160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.688374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.688387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.688478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.688490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.688580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.688594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.144 qpair failed and we were unable to recover it. 00:28:33.144 [2024-11-15 11:46:33.688747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.144 [2024-11-15 11:46:33.688759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.688831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.688842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.688914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.688926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 
00:28:33.145 [2024-11-15 11:46:33.689280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.689897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.689909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.690047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.690311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.690515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.690611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 
00:28:33.145 [2024-11-15 11:46:33.690723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.690806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.690981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.690994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.691170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.691339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.691493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.691585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.691741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.691858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.691998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 
00:28:33.145 [2024-11-15 11:46:33.692157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.692313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.692546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.692654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.692767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.692862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.692963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.692976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.693162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.693176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.693275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.693288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.693439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.693453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 
00:28:33.145 [2024-11-15 11:46:33.693541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.693554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.693702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.145 [2024-11-15 11:46:33.693715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.145 qpair failed and we were unable to recover it. 00:28:33.145 [2024-11-15 11:46:33.693859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.693872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.694082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.694095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.694305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.694318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.694482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.694496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.694580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.694593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.694742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.694754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.694900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.694913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.695086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.695100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 
00:28:33.146 [2024-11-15 11:46:33.695202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.695217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.695356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.695368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.695534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.695548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.695692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.695706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.695810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.695823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.695993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.696006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.696261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.696274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.696358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.696373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.696533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.696546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.696786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.696799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 
00:28:33.146 [2024-11-15 11:46:33.696946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.696959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.697099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.697113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.697198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.697210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.697359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.697371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.697529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.697543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.697611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.697623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.697835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.697848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 
00:28:33.146 [2024-11-15 11:46:33.698506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.698906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.698919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.699100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.699113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.699184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.699196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.146 qpair failed and we were unable to recover it. 00:28:33.146 [2024-11-15 11:46:33.699410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.146 [2024-11-15 11:46:33.699423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.699591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.699604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.699697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.699710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 
00:28:33.147 [2024-11-15 11:46:33.699905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.699918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.700006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.700018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.700171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.700184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.700432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.700444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.700633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.700647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.700798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.700811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.700898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.700909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.701069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.701083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.701301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.701314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.701456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.701474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 
00:28:33.147 [2024-11-15 11:46:33.701688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.701703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.701880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.701893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.702857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.702870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 
00:28:33.147 [2024-11-15 11:46:33.703143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.703900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.703913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.704067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.704080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.704224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.704237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.704445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.704464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 
00:28:33.147 [2024-11-15 11:46:33.704544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.704555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.704658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.147 [2024-11-15 11:46:33.704670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.147 qpair failed and we were unable to recover it. 00:28:33.147 [2024-11-15 11:46:33.704759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.704772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.704859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.704872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.705012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.705202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.705295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.705393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.705477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.705583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 
00:28:33.148 [2024-11-15 11:46:33.705790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.705803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.706060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.706074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.706291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.706306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.706468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.706482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.706650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.706663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.706760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.706772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.706866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.706879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.707045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.707059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.707206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.707218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.707401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.707425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 
00:28:33.148 [2024-11-15 11:46:33.707578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.707594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.707781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.707794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.707868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.707879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.707988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.708145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.708363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.708456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.708702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.708897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.708980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.708992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 
00:28:33.148 [2024-11-15 11:46:33.709092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.709190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.709438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.709685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.709795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.709892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.709986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.709998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.710154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.710167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.710353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.710366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 00:28:33.148 [2024-11-15 11:46:33.710470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.148 [2024-11-15 11:46:33.710483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.148 qpair failed and we were unable to recover it. 
00:28:33.149 [2024-11-15 11:46:33.710675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.710687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.710794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.710806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.711049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.711061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.711230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.711242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.711384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.711397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.711558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.711570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.711810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.711822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.711994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.712196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.712311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 
00:28:33.149 [2024-11-15 11:46:33.712481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.712597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.712732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.712848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.712957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.712970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.713060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.713179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.713281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.713434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.713542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 
00:28:33.149 [2024-11-15 11:46:33.713658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.713955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.713968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.149 qpair failed and we were unable to recover it. 00:28:33.149 [2024-11-15 11:46:33.714857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.149 [2024-11-15 11:46:33.714870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 
00:28:33.150 [2024-11-15 11:46:33.715020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.715033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.715111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.715123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.715329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.715342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.715507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.715521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.715747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.715764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.715919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.715938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.716100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.716118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.716266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.716285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.716561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.716579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.716746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.716759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 
00:28:33.150 [2024-11-15 11:46:33.716830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.716842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.717973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.717997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.718254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.718267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.718394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.718406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 
00:28:33.150 [2024-11-15 11:46:33.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.718571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.718662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.718674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.718762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.718774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.719008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.719042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.719230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.719264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.719523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.719557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.719690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.719724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.719935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.719967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.720096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.720128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.720338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.720371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 
00:28:33.150 [2024-11-15 11:46:33.720552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.720569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.720732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.720764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.720895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.720927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.721181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.721214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.721478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.721491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.721649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.721661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.150 [2024-11-15 11:46:33.721812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.150 [2024-11-15 11:46:33.721846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.150 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.722042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.722076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.722191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.722224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.722373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.722406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 
00:28:33.151 [2024-11-15 11:46:33.722675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.722708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.723017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.723050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.723191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.723224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.723515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.723528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.723686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.723699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.723911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.723924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.724067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.724080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.724323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.724356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.724543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.724578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.724772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.724805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 
00:28:33.151 [2024-11-15 11:46:33.725030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.725063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.725207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.725220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.725361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.725400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.725550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.725584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.725794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.725827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.726039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.726073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.726325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.726357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.726492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.726531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.726787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.726800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.726963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.726994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 
00:28:33.151 [2024-11-15 11:46:33.727138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.727170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.727285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.727317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.727628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.727663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.727848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.728081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.728118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.728324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.728336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.728531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.728565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.728766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.728799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.728938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.728970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.729106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.729139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 
00:28:33.151 [2024-11-15 11:46:33.729366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.729405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.729633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.729645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.729873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.729885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.151 qpair failed and we were unable to recover it. 00:28:33.151 [2024-11-15 11:46:33.730074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.151 [2024-11-15 11:46:33.730106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.730220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.730253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.730367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.730399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.730649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.730663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.730772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.730805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.730995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.731026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.731226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.731267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 
00:28:33.152 [2024-11-15 11:46:33.731423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.731436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.731590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.731624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.731745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.731777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.731980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.732013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.732213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.732247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.732507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.732542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.732743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.732775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.733054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.733087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.733284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.733324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.733408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.733418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 
00:28:33.152 [2024-11-15 11:46:33.733576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.733589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.733732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.733743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.733832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.733858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.734145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.734178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.734431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.734476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.734620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.734652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.734916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.734928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.735153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.735191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.735322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.735353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.735483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.735517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 
00:28:33.152 [2024-11-15 11:46:33.735775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.735807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.735999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.736032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.736173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.736206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.736470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.736505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.736691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.736702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.736902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.736913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.737124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.737157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.737298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.737331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.737520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.737555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.152 qpair failed and we were unable to recover it. 00:28:33.152 [2024-11-15 11:46:33.737834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.152 [2024-11-15 11:46:33.737866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 
00:28:33.153 [2024-11-15 11:46:33.737992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.738025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.738283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.738294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.738578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.738591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.738726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.738738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.738823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.738834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.738990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.739002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.739146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.739158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.739380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.739412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.739556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.739589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.739731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.739763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 
00:28:33.153 [2024-11-15 11:46:33.740035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.740068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.740203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.740235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.740381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.740426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.740569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.740581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.740764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.740792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.740928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.740959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.741082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.741115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.741310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.741341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.741473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.741511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.741706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.741742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 
00:28:33.153 [2024-11-15 11:46:33.741987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.742021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.742331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.742363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.742555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.742566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.742703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.742714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.743008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.743043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.743259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.743292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.743550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.743586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.743715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.743729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.743823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.743833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.743976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.743987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 
00:28:33.153 [2024-11-15 11:46:33.744087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.744098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.744245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.153 [2024-11-15 11:46:33.744256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.153 qpair failed and we were unable to recover it. 00:28:33.153 [2024-11-15 11:46:33.744397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.744408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.744573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.744584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.744723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.744736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.744812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.744822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.744963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.744974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.745125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.745136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.745372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.745383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.745546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.745558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 
00:28:33.154 [2024-11-15 11:46:33.745693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.745704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.745870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.745905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.746123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.746157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.746366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.746398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.746556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.746591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.746858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.746890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.747196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.747228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.747424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.747471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.747763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.747775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.747843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.747854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 
00:28:33.154 [2024-11-15 11:46:33.747950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.747960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.748166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.748199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.748379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.748390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.748527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.748539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.748707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.748718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.748866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.748878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.749029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.749060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.749202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.749242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.749368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.749401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.749601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.749636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 
00:28:33.154 [2024-11-15 11:46:33.749840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.749872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.750948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.750992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.751193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.751225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 00:28:33.154 [2024-11-15 11:46:33.751341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.154 [2024-11-15 11:46:33.751373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.154 qpair failed and we were unable to recover it. 
00:28:33.154 [2024-11-15 11:46:33.751628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.751661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.751789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.751822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.752067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.752078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.752161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.752171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.752328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.752339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.752496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.752507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.752654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.752685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.752943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.752975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.753159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.753191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.753390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.753423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 
00:28:33.155 [2024-11-15 11:46:33.753627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.753662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.753946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.753978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.754173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.754207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.754416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.754449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.754647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.754680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.754825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.754857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.755050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.755082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.755285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.755317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.755472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.755505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.755708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.755740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 
00:28:33.155 [2024-11-15 11:46:33.755945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.755977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.756261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.756293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.756568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.756603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.756800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.756832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.757061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.757094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.757343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.757355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.757452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.757466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.757614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.757625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.757729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.757739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.757901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.757932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 
00:28:33.155 [2024-11-15 11:46:33.758049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.758080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.758337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.758369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.758475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.758485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.758626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.758638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.758779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.758791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.758978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.759010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.759130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.759164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.155 [2024-11-15 11:46:33.759352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.155 [2024-11-15 11:46:33.759391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.155 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.759581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.759594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.759670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.759681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 
00:28:33.156 [2024-11-15 11:46:33.759773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.759782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.760005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.760016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.760247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.760258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.760349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.760361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.760447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.760463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.760716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.760751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.760940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.760973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.761176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.761211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.761412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.761423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.761653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.761687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 
00:28:33.156 [2024-11-15 11:46:33.761990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.762023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.762160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.762197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.762479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.762513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.762798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.762809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.763006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.763038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.763179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.763212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.763440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.763482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.763759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.763792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.764048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.764082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.764315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.764347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 
00:28:33.156 [2024-11-15 11:46:33.764551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.764584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.764801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.764813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.764977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.765008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.765211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.765244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.765384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.765416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.765607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.765618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.765872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.765904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.766159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.766191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.766398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.766409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.766598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.766610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 
00:28:33.156 [2024-11-15 11:46:33.766856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.766867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.767019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.767030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.767185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.767197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.767409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.767419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.767571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.767583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.156 qpair failed and we were unable to recover it. 00:28:33.156 [2024-11-15 11:46:33.767673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.156 [2024-11-15 11:46:33.767683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.767881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.767913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.768100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.768138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.768324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.768356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.768481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.768493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 
00:28:33.157 [2024-11-15 11:46:33.768593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.768603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.768696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.768706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.768884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.768895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.769130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.769141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.769351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.769384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.769587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.769620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.769855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.769887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.770019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.770030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.770116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.770126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.770205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.770215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 
00:28:33.157 [2024-11-15 11:46:33.770390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.770425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.770624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.770696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.770868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.770905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.771111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.771147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.771283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.771304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.771450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.771468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.771703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.771714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.771827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.771858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.772045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.772078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.772310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.772357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 
00:28:33.157 [2024-11-15 11:46:33.772444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.772454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.772546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.772557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.772658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.772690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.772815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.772851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.773067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.773100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.773298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.773329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.773474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.773508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.773653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.773688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.773882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.773893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.774061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.774096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 
00:28:33.157 [2024-11-15 11:46:33.774214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.774245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.774481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.774517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.774633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.157 [2024-11-15 11:46:33.774665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.157 qpair failed and we were unable to recover it. 00:28:33.157 [2024-11-15 11:46:33.774919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.774951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.775136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.775167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.775390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.775402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.775539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.775551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.775640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.775653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.775807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.775840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.775978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 
00:28:33.158 [2024-11-15 11:46:33.776204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.776377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.776561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.776712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.776809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.776956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.776969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.777053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.777064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.777323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.777360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.777567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.777602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.777919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.777931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 
00:28:33.158 [2024-11-15 11:46:33.778162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.778173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.778334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.778345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.778518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.778551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.778692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.778724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.778857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.778892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.779029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.779060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.779300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.779333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.779529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.779564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.779726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.779738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.779890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.779921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 
00:28:33.158 [2024-11-15 11:46:33.780123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.780154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.780468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.780503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.780701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.780737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.781020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.781052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.781363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.781397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.781612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.158 [2024-11-15 11:46:33.781624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.158 qpair failed and we were unable to recover it. 00:28:33.158 [2024-11-15 11:46:33.781863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.781894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.782092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.782125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.782324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.782357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.782541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.782574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 
00:28:33.159 [2024-11-15 11:46:33.782778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.782789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.783028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.783060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.783342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.783373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.783501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.783534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.783740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.783776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.784054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.784087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.784368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.784400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.784603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.784616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.784774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.784786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.784932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.784944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 
00:28:33.159 [2024-11-15 11:46:33.785188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.785220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.785479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.785513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.785667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.785678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.785758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.785769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.785921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.785932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.786090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.786122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.786238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.786270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.786525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.786558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.786735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.786746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 00:28:33.159 [2024-11-15 11:46:33.786821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.786830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it. 
00:28:33.159 [2024-11-15 11:46:33.786904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.159 [2024-11-15 11:46:33.786914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.159 qpair failed and we were unable to recover it.
00:28:33.159 - 00:28:33.165 [2024-11-15 11:46:33.787005 through 11:46:33.829276] (the same three-message sequence repeats for every subsequent reconnect attempt in this window: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; the attempts alternate between tqpair=0x7f4f3c000b90 and tqpair=0x7f4f34000b90, with a single attempt against tqpair=0x1922550, and none succeed)
00:28:33.165 [2024-11-15 11:46:33.829491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.829525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.829725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.829735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.829888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.829899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.830070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.830175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.830328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.830430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.830579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.830780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.830995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 
00:28:33.165 [2024-11-15 11:46:33.831177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.831349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.831521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.831684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.831800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.831893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.831903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.832055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.832066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.832151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.832182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.832394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.832427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.832627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.832660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 
00:28:33.165 [2024-11-15 11:46:33.832845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.832857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.832949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.832959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.833147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.833178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.833535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.833583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.833730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.833762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.833985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.165 [2024-11-15 11:46:33.834017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.165 qpair failed and we were unable to recover it. 00:28:33.165 [2024-11-15 11:46:33.834208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.834242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.834431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.834441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.834530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.834540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.834717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.834729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 
00:28:33.166 [2024-11-15 11:46:33.834809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.834821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.834922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.834932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.835073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.835107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.835236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.835268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.835454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.835515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.835652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.835682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.835803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.835838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.836022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.836055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.836220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.836253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.836489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.836529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 
00:28:33.166 [2024-11-15 11:46:33.836663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.836695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.836914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.836945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.837130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.837162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.837348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.837381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.837670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.837684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.837772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.837802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.837994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.838027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.838172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.838203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.838424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.838456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.838668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.838702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 
00:28:33.166 [2024-11-15 11:46:33.838977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.838987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.839076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.839173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.839328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.839487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.839584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.839773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.839972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.840005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.840149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.840181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.840311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.840342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 
00:28:33.166 [2024-11-15 11:46:33.840484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.840518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.840708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.840719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.840995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.841017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.841114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.166 [2024-11-15 11:46:33.841135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.166 qpair failed and we were unable to recover it. 00:28:33.166 [2024-11-15 11:46:33.841229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.841238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.841390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.841422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.841614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.841625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.841769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.841780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.841972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.842004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.842201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.842234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 
00:28:33.167 [2024-11-15 11:46:33.842429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.842472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.842632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.842667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.842881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.842914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.843116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.843147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.843263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.843298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.843430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.843496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.843771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.843802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.844055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.844087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.844343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.844377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.844593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.844628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 
00:28:33.167 [2024-11-15 11:46:33.844809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.844820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.844960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.844972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.845207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.845217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.845358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.845368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.845521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.845535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.845748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.845781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.845918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.845955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.846154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.846186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.846325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.846357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.846643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.846677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 
00:28:33.167 [2024-11-15 11:46:33.846817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.846848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.847030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.847052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.847220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.847251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.847558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.847598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.847799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.847837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.848070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.848104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.167 [2024-11-15 11:46:33.848321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.167 [2024-11-15 11:46:33.848354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.167 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.848561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.848596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.848745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.848770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.848848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.848858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 
00:28:33.168 [2024-11-15 11:46:33.849011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.849032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.849132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.849165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.849296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.849335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.849479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.849512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.849706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.849742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.849898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.849930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.850138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.850174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.850426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.850467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.850732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.850765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.850988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.851022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 
00:28:33.168 [2024-11-15 11:46:33.851231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.851265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.851470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.851483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.851715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.851747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.851885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.851920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.852061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.852094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.852313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.852347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.852499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.852534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.852659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.852697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.852812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.852844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.852969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.852979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 
00:28:33.168 [2024-11-15 11:46:33.853083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.853094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.853335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.853368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.853508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.853540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.853733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.853765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.853925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.853940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.854022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.854043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.854188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.854232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.854417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.854450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.854652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.854664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.854750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.854760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 
00:28:33.168 [2024-11-15 11:46:33.854915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.854926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.855012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.855022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.855102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.855132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.855416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.855452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.168 [2024-11-15 11:46:33.855626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.168 [2024-11-15 11:46:33.855661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.168 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.855787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.855819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.855962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.856004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.856169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.856179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.856337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.856369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.856498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.856532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 
00:28:33.169 [2024-11-15 11:46:33.856811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.856844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.856966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.856978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.857214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.857248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.857523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.857560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.857674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.857685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.857772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.857783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.857885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.857895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.858054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.858099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.858231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.858264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.858561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.858605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 
00:28:33.169 [2024-11-15 11:46:33.858712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.858724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.858881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.858893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.858983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.858993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.859137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.859148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.859354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.859365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.859517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.859529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.859685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.859696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.859772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.859782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.859875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.859886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.860046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 
00:28:33.169 [2024-11-15 11:46:33.860153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.860256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.860342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.860501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.860652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.860886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.860919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.861115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.861148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.861360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.861392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.861681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.861691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.861784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.861819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 
00:28:33.169 [2024-11-15 11:46:33.862016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.862052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.862271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.862303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.169 qpair failed and we were unable to recover it. 00:28:33.169 [2024-11-15 11:46:33.862603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.169 [2024-11-15 11:46:33.862644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.862792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.862823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.862935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.862967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.863218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.863250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.863455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.863510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.863807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.863818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.863960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.863992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.864229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.864263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 
00:28:33.170 [2024-11-15 11:46:33.864446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.864492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.864609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.864643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.864856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.864887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.865003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.865035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.865232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.865266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.865547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.865581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.865832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.865865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.866069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.866101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.866289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.866322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.866508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.866543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 
00:28:33.170 [2024-11-15 11:46:33.866742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.866763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.866920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.866952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.867228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.867263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.867517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.867550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.867849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.867880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.868078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.868110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.868238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.868269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.868480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.868516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.868760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.868793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.868911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.868922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 
00:28:33.170 [2024-11-15 11:46:33.869138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.869171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.869381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.869413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.869612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.869624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.869761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.869771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.869925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.869940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.870105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.870141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.870349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.870382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.870560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.870571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.870658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.870668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.870765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.870774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 
00:28:33.170 [2024-11-15 11:46:33.870925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.870970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.871170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.170 [2024-11-15 11:46:33.871201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.170 qpair failed and we were unable to recover it. 00:28:33.170 [2024-11-15 11:46:33.871404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.871441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.871580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.871591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.871668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.871679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.871845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.871877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.871997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.872027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.872216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.872248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.872480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.872514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.872765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.872776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 
00:28:33.171 [2024-11-15 11:46:33.872865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.872876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.872990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.873021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.873167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.873199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.873380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.873412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.873597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.873629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.873932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.873963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.874096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.874128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.874271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.874303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.874610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.874644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.874854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.874888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 
00:28:33.171 [2024-11-15 11:46:33.875067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.875098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.875238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.875270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.875420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.875453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.875649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.875679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.875933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.875965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.876171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.876205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.876456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.876470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.876702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.876714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.876867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.876878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.877029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.877040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 
00:28:33.171 [2024-11-15 11:46:33.877192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.877203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.877447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.877457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.877550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.877561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.877734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.877779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.877916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.877954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.878238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.878270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.878512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.878546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.878732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.878743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.878957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.878991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.879189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.879221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 
00:28:33.171 [2024-11-15 11:46:33.879430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.879469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.879759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.879785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.879999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.880031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.171 qpair failed and we were unable to recover it. 00:28:33.171 [2024-11-15 11:46:33.880223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.171 [2024-11-15 11:46:33.880255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.880473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.880507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.880713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.880724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.880922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.880954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.881164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.881196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.881480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.881513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.881640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.881671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 
00:28:33.172 [2024-11-15 11:46:33.881846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.881856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.882011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.882042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.882256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.882288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.882513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.882546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.882741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.882772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.882922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.882956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.883082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.883095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.883199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.883211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.883381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.883392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.883543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.883555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 
00:28:33.172 [2024-11-15 11:46:33.883809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.883843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.884055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.884089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.884316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.884348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.884530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.884563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.884751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.884763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.884898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.884911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.885132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.885168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.885402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.885434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.885704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.885715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.885858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.885868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 
00:28:33.172 [2024-11-15 11:46:33.886074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.886175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.886319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.886404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.886572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.886687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.886819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.886850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.887118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.887151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.887370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.172 [2024-11-15 11:46:33.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.172 qpair failed and we were unable to recover it. 00:28:33.172 [2024-11-15 11:46:33.887652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.887686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 
00:28:33.173 [2024-11-15 11:46:33.887815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.887825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.887970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.887984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.888124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.888135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.888220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.888231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.888320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.888358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.888562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.888594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.888794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.888835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.889086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.889097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.889255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.889289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.889565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.889608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 
00:28:33.173 [2024-11-15 11:46:33.889749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.889782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.889972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.890129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.890370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.890531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.890723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.890869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.890952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.890962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.891188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.891200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.891281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.891291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 
00:28:33.173 [2024-11-15 11:46:33.891381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.891393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.891581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.891609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.891916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.891953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.892164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.892197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.892331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.892364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.892627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.892661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.892850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.892882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.893152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.893163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.893379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.893391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.893487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.893497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 
00:28:33.173 [2024-11-15 11:46:33.893677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.893709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.893845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.893878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.894003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.894035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.894234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.894269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.894474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.894533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.894825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.894856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.895115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.895138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.895284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.895318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.895543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.895576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.895762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.895795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 
00:28:33.173 [2024-11-15 11:46:33.895991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.896003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.896157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.896196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.896342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.173 [2024-11-15 11:46:33.896354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.173 qpair failed and we were unable to recover it. 00:28:33.173 [2024-11-15 11:46:33.896497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.896509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.896668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.896680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.896771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.896801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.897018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.897051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.897180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.897212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.897442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.897492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.897639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.897674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 
00:28:33.174 [2024-11-15 11:46:33.897958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.897996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.898200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.898234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.898487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.898520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.898764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.898797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.898982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.898993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.899168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.899179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.899397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.899441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.899569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.899602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.899789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.899821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.900049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.900081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 
00:28:33.174 [2024-11-15 11:46:33.900344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.900378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.900607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.900650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.900853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.900864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.901023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.901057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.901246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.901279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.901477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.901511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.901765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.901797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.901979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.902010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.902229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.902240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.902368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.902380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 
00:28:33.174 [2024-11-15 11:46:33.902533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.902572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.902777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.902811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.903121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.903154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.903368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.903408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.903692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.903731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.903887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.903918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.904099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.904133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.904263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.904305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.904523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.904559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.904768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.904801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 
00:28:33.174 [2024-11-15 11:46:33.905050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.905084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.905213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.905244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.905434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.905476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.905787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.905821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.906022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.906060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.174 qpair failed and we were unable to recover it. 00:28:33.174 [2024-11-15 11:46:33.906237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.174 [2024-11-15 11:46:33.906247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.906387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.906398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.906492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.906503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.906672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.906683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.906836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.906847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 
00:28:33.175 [2024-11-15 11:46:33.906974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.907004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.907229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.907261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.907434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.907476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.907735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.907777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.907898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.907930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.908061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.908093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.908355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.908389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.908537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.908570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.908800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.908811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.908904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.908915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 
00:28:33.175 [2024-11-15 11:46:33.909010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.909892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.909903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.910019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.910051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.910243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.910275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 
00:28:33.175 [2024-11-15 11:46:33.910473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.910507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.910655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.910686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.910891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.910926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.911121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.911145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.911382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.911395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.911687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.911723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.911935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.911968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.912087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.912119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.912363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.912400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.912548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.912583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 
00:28:33.175 [2024-11-15 11:46:33.912882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.912919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.913075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.913086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.913240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.913251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.913404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.913438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.913788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.913822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.913963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.913996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.175 qpair failed and we were unable to recover it. 00:28:33.175 [2024-11-15 11:46:33.914124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.175 [2024-11-15 11:46:33.914160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.914474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.914509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.914723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.914767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.914856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.914867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 
00:28:33.176 [2024-11-15 11:46:33.914940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.914950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.915986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.915996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.916214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.916249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.916372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.916404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 
00:28:33.176 [2024-11-15 11:46:33.916553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.916588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.916730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.916767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.916984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.917016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.917244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.917276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.917480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.917514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.917663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.917696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.917833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.917866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.917980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.918012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.918197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.918229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.918470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.918503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 
00:28:33.176 [2024-11-15 11:46:33.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.918737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.918953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.918988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.919192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.919204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.919357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.919390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.919521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.919561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.919747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.919778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.919966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.919977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.920168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.920200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.920403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.920434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.920636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.920670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 
00:28:33.176 [2024-11-15 11:46:33.920851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.920862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.920943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.176 [2024-11-15 11:46:33.921184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.176 [2024-11-15 11:46:33.921216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.176 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.921419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.921450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.921597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.921629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.921826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.921859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.922321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.922332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.922539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.922550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.922775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.922808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.922937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.922969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 
00:28:33.177 [2024-11-15 11:46:33.923193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.923226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.923435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.923477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.923670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.923703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.923827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.923860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.924054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.924064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.924227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.924258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.924532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.924566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.924770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.924801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.925022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.925054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.925259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.925291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 
00:28:33.177 [2024-11-15 11:46:33.925491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.925525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.925654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.925668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.925754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.925765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.925910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.925943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.926153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.926191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.926396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.926429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.926565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.926598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.926720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.926754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.926905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.926917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.927009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.927019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 
00:28:33.177 [2024-11-15 11:46:33.927084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.927095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.927225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.927260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.927575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.927612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.927928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.927962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.928188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.928234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.928370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.928404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.928678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.928711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.928923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.928955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.929274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.929286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.929380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.929390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 
00:28:33.177 [2024-11-15 11:46:33.929565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.929577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.929786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.929798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.929954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.929966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.930058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.930086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.177 [2024-11-15 11:46:33.930280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.177 [2024-11-15 11:46:33.930322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.177 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.930516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.930550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.930783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.930819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.930956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.930968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.931110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.931121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.931266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.931278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 
00:28:33.178 [2024-11-15 11:46:33.931485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.931514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.931681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.931714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.931857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.931891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.932026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.932064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.932194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.932233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.932536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.932574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.932781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.932792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.932949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.932984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.933175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.933208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.933359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.933397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 
00:28:33.178 [2024-11-15 11:46:33.933611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.933655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.933790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.933828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.934020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.934031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.934180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.934191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.934456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.934499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.934614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.934647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.934796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.934829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.935040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.935075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.935161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.935171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.935259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.935268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 
00:28:33.178 [2024-11-15 11:46:33.935511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.935544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.935731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.935764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.936015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.936047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.936306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.936317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.936552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.936566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.936657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.936667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.936847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.936879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.937066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.937098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.937225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.937256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.937470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.937502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 
00:28:33.178 [2024-11-15 11:46:33.937688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.937721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.937933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.937966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.938142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.938360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.938393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.938615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.938652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.938959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.938991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.939260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.939292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.939515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.939548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.939673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.939706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.939853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.939865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 
00:28:33.178 [2024-11-15 11:46:33.940007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.178 [2024-11-15 11:46:33.940018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.178 qpair failed and we were unable to recover it. 00:28:33.178 [2024-11-15 11:46:33.940257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.940290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.940592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.940825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.940857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.941010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.941042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.941173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.941207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.941413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.941446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.941681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.941715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.941907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.941940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.942098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.942109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 
00:28:33.179 [2024-11-15 11:46:33.942270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.942302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.942447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.942497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.942731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.942763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.943025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.943036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.943269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.943281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.943502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.943536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.943659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.943691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.943982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.944020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.944275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.944307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.944434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.944478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 
00:28:33.179 [2024-11-15 11:46:33.944728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.944740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.944911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.944944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.945212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.945245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.945360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.945392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.945583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.945617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.945896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.945931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.946072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.946151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.946235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.946336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 
00:28:33.179 [2024-11-15 11:46:33.946492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.946703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.946870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.946901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.947094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.947127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.947418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.947452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.947590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.947622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.947761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.947793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.947920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.947952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.948182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.948192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.948268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.948289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 
00:28:33.179 [2024-11-15 11:46:33.948359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.948369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.948464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.948474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.948708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.948719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.948896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.948907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.949061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.949093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.949311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.949343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.949545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.179 [2024-11-15 11:46:33.949579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.179 qpair failed and we were unable to recover it. 00:28:33.179 [2024-11-15 11:46:33.949805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.949838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.950036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.950068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.950267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.950302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 
00:28:33.180 [2024-11-15 11:46:33.950519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.950553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.950751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.950790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.950910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.950921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.951190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.951202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.951339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.951350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.951431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.951452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.951576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.951586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.951665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.951675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.951737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.951746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.952029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.952060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 
00:28:33.180 [2024-11-15 11:46:33.952242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.952274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.952417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.952449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.952737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.952769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.952993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.953003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.953239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.953271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.953481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.953515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.953774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.953806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.953995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.954016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.954171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.954203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.954350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.954382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 
00:28:33.180 [2024-11-15 11:46:33.954516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.954549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.954750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.954762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.954864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.954874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.955078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.955110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.180 [2024-11-15 11:46:33.955305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.180 [2024-11-15 11:46:33.955337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.180 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.955538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.955571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.955828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.955860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.955976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.956008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.956226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.956237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.956329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.956339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 
00:28:33.181 [2024-11-15 11:46:33.956434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.956444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.956661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.956672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.956824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.956836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.956999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.957011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.181 qpair failed and we were unable to recover it. 00:28:33.181 [2024-11-15 11:46:33.957279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.181 [2024-11-15 11:46:33.957319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.464 qpair failed and we were unable to recover it. 00:28:33.464 [2024-11-15 11:46:33.957517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.464 [2024-11-15 11:46:33.957551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.464 qpair failed and we were unable to recover it. 00:28:33.464 [2024-11-15 11:46:33.957752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.957763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.957911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.957923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.958002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.958012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.958271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.958303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 
00:28:33.465 [2024-11-15 11:46:33.958519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.958552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.958879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.958921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.959073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.959108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.959366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.959397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.959618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.959652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.959957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.959995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.960275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.960286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.960449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.960501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.960637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.960647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.960735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.960745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 
00:28:33.465 [2024-11-15 11:46:33.960987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.961017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.961142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.961172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.961374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.961404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.961547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.961581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.961759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.961769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.961870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.961882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.962035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.962144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.962237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.962368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 
00:28:33.465 [2024-11-15 11:46:33.962546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.962713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.962936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.962966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.963121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.963154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.963438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.963478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.963667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.963700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.963963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.963997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.964328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.964339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.964540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.964576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.964834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.964870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 
00:28:33.465 [2024-11-15 11:46:33.965178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.965190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.965406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.965417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.965511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.465 [2024-11-15 11:46:33.965521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.465 qpair failed and we were unable to recover it. 00:28:33.465 [2024-11-15 11:46:33.965732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.965765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.966027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.966067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.966208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.966240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.966439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.966480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.966706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.966738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.966992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.967024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.967223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.967233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 
00:28:33.466 [2024-11-15 11:46:33.967439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.967450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.967544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.967555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.967716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.967727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.967860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.967870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.968019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.968030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.968179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.968190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.968347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.968358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.968551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.968585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.968875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.968908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.969089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.969123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 
00:28:33.466 [2024-11-15 11:46:33.969250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.969261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.969403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.969414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.969567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.969601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.969801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.969833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.969964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.969995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.970197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.970230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.970425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.970456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.970667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.970701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.971019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.971194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 
00:28:33.466 [2024-11-15 11:46:33.971355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.971575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.971676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.971786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.971941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.971952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.972186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.972198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.972347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.972359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.972529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.972541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.972628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.972638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 
00:28:33.466 [2024-11-15 11:46:33.972825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1930530 is same with the state(6) to be set 00:28:33.466 [2024-11-15 11:46:33.973170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.973199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.466 [2024-11-15 11:46:33.973351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.466 [2024-11-15 11:46:33.973392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.466 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.973617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.973651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.973881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.973892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.974078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.974089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.974245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.974278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.974540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.974576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.974760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.974793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.975004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.975015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 
00:28:33.467 [2024-11-15 11:46:33.975091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.975102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.975261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.975273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.975410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.975445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.975666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.975700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.975983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.976152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.976250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.976504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.976672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.976768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 
00:28:33.467 [2024-11-15 11:46:33.976987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.976998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.977139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.977150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.977295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.977327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.977581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.977614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.977757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.977789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.978003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.978034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.978212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.978249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.978342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.978355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.978434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.978444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.978638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.978671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 
00:28:33.467 [2024-11-15 11:46:33.978812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.978843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.979039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.979072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.979259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.979292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.979489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.979523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.979726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.979758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.980068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.980101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.980357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.980388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.980580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.980615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.980844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.980878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 00:28:33.467 [2024-11-15 11:46:33.981076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.467 [2024-11-15 11:46:33.981119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.467 qpair failed and we were unable to recover it. 
00:28:33.468 [2024-11-15 11:46:33.981250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.981283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.981413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.981445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.981713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.981746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.981973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.982005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.982139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.982172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.982307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.982340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.982482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.982516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.982742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.982774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.982982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.983014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.983157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.983168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 
00:28:33.468 [2024-11-15 11:46:33.983267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.983277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.983432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.983443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.983718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.983752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.983883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.983894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.984061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.984092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.984276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.984310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.984427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.984472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.984672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.984705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.984849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.984882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.985013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.985046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 
00:28:33.468 [2024-11-15 11:46:33.985306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.985340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.985635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.985669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.985861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.985894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.986020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.986053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.986185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.986218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.986504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.986539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.986865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.986909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.987041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.987059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.987211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.987227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.987311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.987326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 
00:28:33.468 [2024-11-15 11:46:33.987504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.987522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.987692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.987709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.987873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.987909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.988105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.988139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.988363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.988396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.988533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.988568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.988768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.468 [2024-11-15 11:46:33.988799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.468 qpair failed and we were unable to recover it. 00:28:33.468 [2024-11-15 11:46:33.988947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.988980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.989163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.989195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.989348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.989381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 
00:28:33.469 [2024-11-15 11:46:33.989530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.989566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.989754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.989787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.989990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.990271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.990489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.990573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.990738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.990812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.990901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.990910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.991019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.991051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 
00:28:33.469 [2024-11-15 11:46:33.991315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.991348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.991484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.991519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.991657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.991690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.991883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.991915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.992130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.992172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.992327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.992337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.992522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.992557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.992767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.992800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.992953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.992985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.993183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.993195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 
00:28:33.469 [2024-11-15 11:46:33.993282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.993292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.993406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.993439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.993662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.993696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.993818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.993851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.993970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.994002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.994118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.994129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.994292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.994305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.994516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.994528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.994684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.994696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.469 qpair failed and we were unable to recover it. 00:28:33.469 [2024-11-15 11:46:33.994915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.469 [2024-11-15 11:46:33.994948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 
00:28:33.470 [2024-11-15 11:46:33.995082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.995114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.995238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.995271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.995457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.995501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.995772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.995805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.996011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.996045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.996180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.996212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.996405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.996438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.996571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.996604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.996742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.996774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.996895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.996927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 
00:28:33.470 [2024-11-15 11:46:33.997051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.997062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.997275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.997307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.997501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.997534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.997669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.997703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.997957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.997989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.998177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.998211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.998411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.998444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.998601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.998634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.998765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.998798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.999052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.999084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 
00:28:33.470 [2024-11-15 11:46:33.999291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.999303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.999534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.999568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.999754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.999786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:33.999931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:33.999964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.000248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.000282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.000417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.000450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.000595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.000629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.000819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.000830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.001020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.001052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.001189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.001222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 
00:28:33.470 [2024-11-15 11:46:34.001440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.001483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.001740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.001773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.001902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.001935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.002150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.002182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.002368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.002400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.002669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.002703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.002951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.002964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.470 [2024-11-15 11:46:34.003057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.470 [2024-11-15 11:46:34.003067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.470 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.003262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.003294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.003514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.003549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 
00:28:33.471 [2024-11-15 11:46:34.003683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.003717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.003847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.003881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.004007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.004040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.004188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.004220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.004439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.004481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.004741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.004775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.005028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.005328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.005408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.005566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 
00:28:33.471 [2024-11-15 11:46:34.005651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.005795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.005979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.005991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.006141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.006173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.006307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.006341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.006525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.006558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.006747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.006778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.006914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.006946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.007071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.007103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.007241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.007273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 
00:28:33.471 [2024-11-15 11:46:34.007554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.007588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.007770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.007803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.007999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.008010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.008179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.008190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.008273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.008282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.008428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.008439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.008540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.008551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.008729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.008763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.008973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.009005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.009138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.009149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 
00:28:33.471 [2024-11-15 11:46:34.009420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.009452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.009576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.009609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.009878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.009910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.010206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.010217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.010483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.471 [2024-11-15 11:46:34.010517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.471 qpair failed and we were unable to recover it. 00:28:33.471 [2024-11-15 11:46:34.010715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.010748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.010880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.010892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.011049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.011061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.011208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.011219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.011366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.011399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 
00:28:33.472 [2024-11-15 11:46:34.011693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.011728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.011942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.011975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.012211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.012246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.012336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.012345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.012411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.012421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.012540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.012551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.012769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.012800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.013081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.013114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.013318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.013349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.013547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.013581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 
00:28:33.472 [2024-11-15 11:46:34.013823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.013857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.014054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.014086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.014369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.014403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.014557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.014590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.014875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.014907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.015092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.015125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.015269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.015302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.015447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.015488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.015604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.015638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.015778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.015810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 
00:28:33.472 [2024-11-15 11:46:34.015995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.016028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.016238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.016249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.016405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.016438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.016669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.016704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.016883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.016910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.017152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.017185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.017413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.017445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.017656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.017688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.017891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.017923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.018149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.018181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 
00:28:33.472 [2024-11-15 11:46:34.018316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.018327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.018532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.018566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.018691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.472 [2024-11-15 11:46:34.018724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.472 qpair failed and we were unable to recover it. 00:28:33.472 [2024-11-15 11:46:34.018851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.018884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.019181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.019214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.019414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.019447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.019582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.019620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.019754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.019787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.019978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.020010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.020167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.020177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 
00:28:33.473 [2024-11-15 11:46:34.020264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.020274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.020488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.020521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.020726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.020759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.020907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.020937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.021088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.021099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.021316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.021348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.021590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.021625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.021818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.021851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.022041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.022052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.022240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.022273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 
00:28:33.473 [2024-11-15 11:46:34.022572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.022605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.022737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.022769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.022969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.022980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.023080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.023090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.023229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.023240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.023397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.023429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.023578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.023612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.023796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.023829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.024027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.024059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.024269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.024301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 
00:28:33.473 [2024-11-15 11:46:34.024425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.024487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.024745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.024778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.024966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.025000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.025223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.025257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.025404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.025414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.025578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.025589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.025735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.025747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.025907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.025942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.026130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.026163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.026307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.026341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 
00:28:33.473 [2024-11-15 11:46:34.026599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.026633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.473 qpair failed and we were unable to recover it. 00:28:33.473 [2024-11-15 11:46:34.026761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.473 [2024-11-15 11:46:34.026795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.027037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.027048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.027161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.027173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.027423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.027454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.027664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.027697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.027882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.027925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.028124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.028157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.028339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.028350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.028593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.028626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 
00:28:33.474 [2024-11-15 11:46:34.028752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.028784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.028895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.028927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.029130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.029162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.029450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.029492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.029769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.029802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.030029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.030062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.030194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.030225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.030399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.030409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.030585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.030597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 00:28:33.474 [2024-11-15 11:46:34.030805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.474 [2024-11-15 11:46:34.030816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.474 qpair failed and we were unable to recover it. 
00:28:33.474 [2024-11-15 11:46:34.030887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.474 [2024-11-15 11:46:34.030897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:33.474 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and qpair recovery error repeat continuously for tqpair=0x7f4f34000b90, addr=10.0.0.2, port=4420 ...]
00:28:33.480 [2024-11-15 11:46:34.076435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.480 [2024-11-15 11:46:34.076446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:33.480 qpair failed and we were unable to recover it.
00:28:33.480 [2024-11-15 11:46:34.076657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.076668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.076809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.076820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.077106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.077139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.077343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.077381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.077525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.077536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.077618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.077629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.077844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.077877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.078079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.078113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.078228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.078260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.078383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.078417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 
00:28:33.480 [2024-11-15 11:46:34.078620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.078655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.078849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.078882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.079077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.079111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.079330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.079364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.079641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.079653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.079805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.079837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.079969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.080001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.080196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.080230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.080355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.080389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.080516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.080549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 
00:28:33.480 [2024-11-15 11:46:34.080684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.080716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.080943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.080975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.081116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.081149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.081277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.081309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.081500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.081512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.081662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.081695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.081888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.081920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.082110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.082141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.082339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.480 [2024-11-15 11:46:34.082350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.480 qpair failed and we were unable to recover it. 00:28:33.480 [2024-11-15 11:46:34.082503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.082515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 
00:28:33.481 [2024-11-15 11:46:34.082656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.082667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.082742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.082752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.082959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.082970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.083105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.083117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.083258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.083289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.083407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.083440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.083649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.083680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.083873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.083906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.084040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.084073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.084375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.084407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 
00:28:33.481 [2024-11-15 11:46:34.084601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.084635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.084931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.084964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.085076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.085110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.085328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.085341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.085439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.085449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.085542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.085551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.085765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.085797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.086010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.086042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.086181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.086214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.086468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.086479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 
00:28:33.481 [2024-11-15 11:46:34.086712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.086744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.086871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.086904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.087043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.087076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.087191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.087223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.087353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.087387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.087542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.087554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.087626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.087635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.087739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.087770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.088054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.088087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.088288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.088319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 
00:28:33.481 [2024-11-15 11:46:34.088450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.088501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.088750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.088761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.088907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.088941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.089056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.089088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.089281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.089313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.089526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.089561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.481 qpair failed and we were unable to recover it. 00:28:33.481 [2024-11-15 11:46:34.089721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.481 [2024-11-15 11:46:34.089753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.089981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.090015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.090207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.090219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.090308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.090319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 
00:28:33.482 [2024-11-15 11:46:34.090480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.090492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.090652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.090685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.090866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.090899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.091026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.091060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.091174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.091207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.091410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.091444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.091594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.091606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.091837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.091875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.092076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.092109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.092384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.092424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 
00:28:33.482 [2024-11-15 11:46:34.092644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.092656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.092802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.092814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.092883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.092893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.093167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.093208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.093419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.093470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.093731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.093763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.094051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.094094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.094306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.094340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.094498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.094510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.094678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.094710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 
00:28:33.482 [2024-11-15 11:46:34.094985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.095022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.095210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.095241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.095425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.095436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.095580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.095614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.095838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.095874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.096083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.096119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.096382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.096414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.096707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.096736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.096882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.096915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.097107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.097140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 
00:28:33.482 [2024-11-15 11:46:34.097270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.097304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.097496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.097533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.097766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.097799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.098011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.098042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.482 [2024-11-15 11:46:34.098238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.482 [2024-11-15 11:46:34.098249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.482 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.098335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.098344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.098530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.098563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.098866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.098899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.099030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.099063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.099196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.099231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 
00:28:33.483 [2024-11-15 11:46:34.099457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.099502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.099762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.099794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.099986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.100020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.100198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.100210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.100389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.100400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.100473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.100483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.100577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.100587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.100741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.100776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.100976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.101163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 
00:28:33.483 [2024-11-15 11:46:34.101383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.101476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.101633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.101708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.101899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.101932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.102058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.102090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.102300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.102334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.102613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.102625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.102773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.102806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.103073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.103108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 
00:28:33.483 [2024-11-15 11:46:34.103363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.103395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.103649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.103683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.103889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.103923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.104042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.104076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.104260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.104295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.104481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.104493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.104745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.104781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.483 qpair failed and we were unable to recover it. 00:28:33.483 [2024-11-15 11:46:34.104902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.483 [2024-11-15 11:46:34.104935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.105138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.105288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 
00:28:33.484 [2024-11-15 11:46:34.105477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.105570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.105681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.105795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.105903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.105912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.106132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.106167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.106360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.106393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.106647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.106683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.106825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.106859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.107083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.107114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 
00:28:33.484 [2024-11-15 11:46:34.107257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.107290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.107551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.107585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.107768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.107801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.108004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.108037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.108227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.108259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.108534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.108545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.108684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.108716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.108970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.109003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.109251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.109278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.109447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.109464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 
00:28:33.484 [2024-11-15 11:46:34.109667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.109699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.109895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.109927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.110125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.110135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.110349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.110362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.110463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.110474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.110614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.110645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.110870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.110901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.111029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.111062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.111252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.111284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.111519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.111553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 
00:28:33.484 [2024-11-15 11:46:34.111842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.111876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.112064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.112097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.112320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.112352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.112450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.112471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.112603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.112614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.484 qpair failed and we were unable to recover it. 00:28:33.484 [2024-11-15 11:46:34.112782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.484 [2024-11-15 11:46:34.112793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.112896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.112928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.113163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.113196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.113346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.113378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.113627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.113639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 
00:28:33.485 [2024-11-15 11:46:34.113784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.113795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.113885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.113895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.114035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.114066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.114269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.114302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.114493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.114526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.114813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.114824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.114962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.114972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.115071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.115081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.115231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.115241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.115398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.115431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 
00:28:33.485 [2024-11-15 11:46:34.115741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.115774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.116894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.116993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.117004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.117113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.117145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 
00:28:33.485 [2024-11-15 11:46:34.117350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.117383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.117654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.117688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.118004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.118037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.118188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.118226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.118360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.118393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.118603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.118637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.118828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.118860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.119009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.119042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.119175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.119207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.119393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.119426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 
00:28:33.485 [2024-11-15 11:46:34.119524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.119550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.119842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.119909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.120197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.485 [2024-11-15 11:46:34.120234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.485 qpair failed and we were unable to recover it. 00:28:33.485 [2024-11-15 11:46:34.120452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.120507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.120823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.120857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.121065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.121098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.121248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.121281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.121441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.121454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.121626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.121637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.121794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.121830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 
00:28:33.486 [2024-11-15 11:46:34.122029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.122061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.122278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.122310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.122465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.122476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.122576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.122586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.122736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.122748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.122819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.122829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.122987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.123026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.123142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.123172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.123373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.123407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.123607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.123618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 
00:28:33.486 [2024-11-15 11:46:34.123797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.123870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.124032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.124067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.124256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.124289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.124413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.124424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.124629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.124664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.124891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.124924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.125232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.125265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.125469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.125501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.125663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.125676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.125819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.125829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 
00:28:33.486 [2024-11-15 11:46:34.126070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.126103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.126287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.126321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.126516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.126550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.126749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.126787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.126903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.126935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.127149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.127182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.127376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.127386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.127556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.127591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.127782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.127822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.128044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.128078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 
00:28:33.486 [2024-11-15 11:46:34.128375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.128409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.486 [2024-11-15 11:46:34.128620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.486 [2024-11-15 11:46:34.128655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.486 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.128934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.128965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.129184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.129223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.129432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.129475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.129685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.129697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.129873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.129905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.130112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.130144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.130348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.130381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.130579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.130591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 
00:28:33.487 [2024-11-15 11:46:34.130813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.130849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.131121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.131154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.131351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.131385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.131525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.131560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.131746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.131779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.132056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.132089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.132349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.132360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.132521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.132533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.132687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.132699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.132858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.132870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 
00:28:33.487 [2024-11-15 11:46:34.132957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.132968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.133074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.133084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.133238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.133271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.133383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.133416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.133691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.133726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.133867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.133899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.134027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.134059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.134247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.134278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.134549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.134561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.134716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.134728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 
00:28:33.487 [2024-11-15 11:46:34.134896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.134929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.135058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.135092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.135301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.135333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.135588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.487 [2024-11-15 11:46:34.135602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.487 qpair failed and we were unable to recover it. 00:28:33.487 [2024-11-15 11:46:34.135753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.135785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.136046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.136080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.136355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.136388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.136585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.136611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.136755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.136766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.136983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.136994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 
00:28:33.488 [2024-11-15 11:46:34.137085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.137277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.137362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.137529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.137694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.137847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.137944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.137954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.138150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.138161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.138234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.138244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 00:28:33.488 [2024-11-15 11:46:34.138337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.488 [2024-11-15 11:46:34.138347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.488 qpair failed and we were unable to recover it. 
00:28:33.488 [2024-11-15 11:46:34.138446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.488 [2024-11-15 11:46:34.138463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:33.488 qpair failed and we were unable to recover it.
00:28:33.488 [2024-11-15 11:46:34.138752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.488 [2024-11-15 11:46:34.138801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:33.488 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4f30000b90 / 0x7f4f34000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every subsequent connection attempt from 11:46:34.138 through 11:46:34.180 (elapsed 00:28:33.488–00:28:33.494) ...]
00:28:33.494 [2024-11-15 11:46:34.180541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.180554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.180738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.180773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.181030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.181063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.181191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.181224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.181354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.181387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.181648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.181682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.181882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.181915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.182136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.182168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.182363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.182396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.182682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.182694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 
00:28:33.494 [2024-11-15 11:46:34.182832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.182843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.183071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.183103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.183389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.183423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.183645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.183677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.183883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.183915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.184179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.184213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.184396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.184430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.184662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.184735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.184904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.184942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.185190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.185228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 
00:28:33.494 [2024-11-15 11:46:34.185424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.185436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.185583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.185619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.185816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.185848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.186046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.186078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.186272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.186304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.494 [2024-11-15 11:46:34.186566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.494 [2024-11-15 11:46:34.186602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.494 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.186743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.186778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.186990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.187023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.187178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.187223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.187418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.187448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 
00:28:33.495 [2024-11-15 11:46:34.187607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.187620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.187828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.187861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.188009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.188045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.188255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.188287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.188479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.188491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.188734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.188765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.188904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.188936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.189133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.189167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.189429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.189480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.189762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.189796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 
00:28:33.495 [2024-11-15 11:46:34.190016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.190262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.190499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.190651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.190752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.190868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.190957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.190967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.191043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.191053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.191226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.191238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.191387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.191418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 
00:28:33.495 [2024-11-15 11:46:34.191626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.191659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.191788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.191818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.192047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.192082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.192362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.192397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.192549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.192593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.192712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.192723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.192955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.192967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.193062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.193073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.193207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.193217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.193304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.193316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 
00:28:33.495 [2024-11-15 11:46:34.193398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.193409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.193491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.495 [2024-11-15 11:46:34.193502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.495 qpair failed and we were unable to recover it. 00:28:33.495 [2024-11-15 11:46:34.193598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.193609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.193767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.193803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.194002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.194033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.194154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.194189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.194394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.194427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.194671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.194705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.194899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.194932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.195132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.195164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 
00:28:33.496 [2024-11-15 11:46:34.195345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.195378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.195526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.195537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.195665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.195677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.195920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.195954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.196215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.196246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.196436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.196480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.196595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.196606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.196684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.196695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.196926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.196959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.197103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.197134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 
00:28:33.496 [2024-11-15 11:46:34.197327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.197365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.197501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.197511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.197708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.197719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.197998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.198029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.198222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.198255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.198468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.198479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.198633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.198644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.198725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.198738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.198892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.198925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.199065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.199096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 
00:28:33.496 [2024-11-15 11:46:34.199328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.199363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.199572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.199583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.199719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.199729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.200005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.200037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.200341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.200372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.200566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.200601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.200888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.200921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.201054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.201086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.201345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.201378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 00:28:33.496 [2024-11-15 11:46:34.201520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.496 [2024-11-15 11:46:34.201553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.496 qpair failed and we were unable to recover it. 
00:28:33.496 [2024-11-15 11:46:34.201686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.201719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.201899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.201910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.202126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.202159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.202300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.202332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.202478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.202510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.202720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.202751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.202961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.202993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.203265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.203298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.203426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.203437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.203549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.203560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 
00:28:33.497 [2024-11-15 11:46:34.203768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.203781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.204018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.204050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.204241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.204273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.204396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.204429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.204601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.204612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.204826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.204859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.205005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.205038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.205243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.205277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.205480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.205514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.205699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.205738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 
00:28:33.497 [2024-11-15 11:46:34.205894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.205906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.206083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.206116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.206369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.206403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.206578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.206589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.206800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.206812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.206953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.206964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.207150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.207184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.207468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.207480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.207709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.207742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 00:28:33.497 [2024-11-15 11:46:34.207960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.497 [2024-11-15 11:46:34.207993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.497 qpair failed and we were unable to recover it. 
00:28:33.497 [2024-11-15 11:46:34.208218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.497 [2024-11-15 11:46:34.208251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:33.497 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 11:46:34.208 to 11:46:34.253 (elapsed 00:28:33.497-00:28:33.503): posix_sock_create() reports "connect() failed, errno = 111" and nvme_tcp_qpair_connect_sock() reports a sock connection error, first for tqpair=0x7f4f30000b90, then for tqpair=0x7f4f34000b90 and briefly tqpair=0x7f4f3c000b90, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:33.503 [2024-11-15 11:46:34.253240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.503 [2024-11-15 11:46:34.253274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:33.503 qpair failed and we were unable to recover it.
00:28:33.503 [2024-11-15 11:46:34.253415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.253448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.253705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.253716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.253895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.253930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.254074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.254107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.254294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.254329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.254521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.254555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.254745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.254778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.255043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.255114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.255403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.255488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 00:28:33.503 [2024-11-15 11:46:34.255723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.503 [2024-11-15 11:46:34.255760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.503 qpair failed and we were unable to recover it. 
00:28:33.503 [2024-11-15 11:46:34.255993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.256027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.256164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.256197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.256386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.256419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.256626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.256661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.256889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.256906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.257070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.257087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.257310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.257344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.257483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.257527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.257678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.257695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.257891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.257931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 
00:28:33.504 [2024-11-15 11:46:34.258121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.258154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.258315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.258349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.258483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.258517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.258704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.258736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.258873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.258906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.259111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.259145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.259408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.259441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.259659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.259693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.259884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.259895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.260033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.260044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 
00:28:33.504 [2024-11-15 11:46:34.260109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.260119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.260271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.260282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.260433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.260443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.260604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.260615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.260858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.260891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.261044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.261077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.261212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.261244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.261358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.261390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.261652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.261686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.261825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.261837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 
00:28:33.504 [2024-11-15 11:46:34.261932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.261942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.504 qpair failed and we were unable to recover it. 00:28:33.504 [2024-11-15 11:46:34.262788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.504 [2024-11-15 11:46:34.262802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.262892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.262904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.262999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.263010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 
00:28:33.505 [2024-11-15 11:46:34.263079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.263089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.263252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.263263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.263425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.263457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.263593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.263626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.263757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.263787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.264095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.264192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.264302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.264445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.264602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 
00:28:33.505 [2024-11-15 11:46:34.264765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.264876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.264908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.265102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.265134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.265256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.265287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.265405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.265448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.265690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.265705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.265912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.265923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.266073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.266083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.266165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.266190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.266394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.266426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 
00:28:33.505 [2024-11-15 11:46:34.266662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.266698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.266839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.266855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.266999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.267033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.267237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.267268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.267396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.267431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.267754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.267765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.267965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.267975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.268062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.268073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.268231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.268242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.268480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.268508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 
00:28:33.505 [2024-11-15 11:46:34.268569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.268579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.268726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.268737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.269020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.269052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.269320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.269353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.505 [2024-11-15 11:46:34.269605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.505 [2024-11-15 11:46:34.269617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.505 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.269753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.269764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.269977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.269988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.270152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.270165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.270267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.270299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.270443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.270486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 
00:28:33.506 [2024-11-15 11:46:34.270641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.270672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.270910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.270940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.271151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.271183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.271372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.271404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.271690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.271701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.271920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.271952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.272163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.272195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.272344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.272377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.272655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.272666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.272829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.272861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 
00:28:33.506 [2024-11-15 11:46:34.272995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.273026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.273155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.273187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.273372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.273405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.273614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.273648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.273956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.273988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.274169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.274201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.274335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.274367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.274554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.274586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.274881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.274914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.275209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.275241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 
00:28:33.506 [2024-11-15 11:46:34.275465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.275499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.275689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.275720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.275948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.275979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.276231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.276264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.276548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.276582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.276729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.276760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.277024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.277054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.277336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.277367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.277561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.277593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.277804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.277837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 
00:28:33.506 [2024-11-15 11:46:34.277968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.506 [2024-11-15 11:46:34.277978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.506 qpair failed and we were unable to recover it. 00:28:33.506 [2024-11-15 11:46:34.278225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.278257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.278529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.278562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.278846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.278879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.279019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.279051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.279176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.279188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.279425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.279457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.279612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.279650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.279860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.279893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 00:28:33.507 [2024-11-15 11:46:34.280000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-11-15 11:46:34.280010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.507 qpair failed and we were unable to recover it. 
00:28:33.507 [2024-11-15 11:46:34.280082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.507 [2024-11-15 11:46:34.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:33.507 qpair failed and we were unable to recover it.
00:28:33.507 [2024-11-15 11:46:34.280332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.507 [2024-11-15 11:46:34.280360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:33.507 qpair failed and we were unable to recover it.
00:28:33.507 [2024-11-15 11:46:34.285139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.507 [2024-11-15 11:46:34.285212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:33.507 qpair failed and we were unable to recover it.
[... repeated identical retries omitted: the same connect() failed, errno = 111 / "qpair failed and we were unable to recover it." sequence recurs for tqpairs 0x7f4f34000b90, 0x7f4f30000b90 and 0x7f4f3c000b90 against addr=10.0.0.2, port=4420, timestamps 2024-11-15 11:46:34.280 through 11:46:34.326 (console time 00:28:33.507-00:28:33.796) ...]
00:28:33.796 [2024-11-15 11:46:34.325927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.796 [2024-11-15 11:46:34.325959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:33.796 qpair failed and we were unable to recover it.
00:28:33.796 [2024-11-15 11:46:34.326085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.326095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.326306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.326339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.326623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.326657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.326934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.326967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.327095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.327127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.327433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.327475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.327758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.327792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.328053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.328159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.328338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 
00:28:33.796 [2024-11-15 11:46:34.328429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.328522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.328771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.328944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.328976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.329197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.329236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.329419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.329452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.329662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.329696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.329892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.329925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.330112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.330123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.330276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.330309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 
00:28:33.796 [2024-11-15 11:46:34.330499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.330533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.330816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.330848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.331102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.331113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.331363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.331395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.331534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.331568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.331761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.796 [2024-11-15 11:46:34.331772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.796 qpair failed and we were unable to recover it. 00:28:33.796 [2024-11-15 11:46:34.331921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.331952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.332176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.332208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.332418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.332450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.332619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.332652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 
00:28:33.797 [2024-11-15 11:46:34.332876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.332909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.333108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.333118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.333357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.333388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.333595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.333628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.333838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.333880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.333964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.333974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.334195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.334227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.334454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.334498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.334695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.334728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.334967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.334978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 
00:28:33.797 [2024-11-15 11:46:34.335185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.335196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.335293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.335304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.335492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.335526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.335721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.335753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.335884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.335916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.336164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.336175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.336393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.336425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.336644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.336676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.336789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.336822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.337005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.337016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 
00:28:33.797 [2024-11-15 11:46:34.337308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.337341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.337539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.337573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.337774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.337807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.338017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.338062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.338245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.338258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.338406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.338437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.338702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.338735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.338917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.338927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.339086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.339097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.339287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.339319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 
00:28:33.797 [2024-11-15 11:46:34.339586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.339620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.339769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.339801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.339934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.339945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.797 [2024-11-15 11:46:34.340169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.797 [2024-11-15 11:46:34.340180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.797 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.340320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.340331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.340543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.340576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.340827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.340859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.341042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.341053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.341146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.341157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.341300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.341311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 
00:28:33.798 [2024-11-15 11:46:34.341474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.341508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.341705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.341737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.341871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.341881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.342035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.342046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.342202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.342235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.342490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.342523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.342668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.342701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.342898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.342931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.343129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.343160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.343449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.343462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 
00:28:33.798 [2024-11-15 11:46:34.343668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.343679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.343770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.343781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.343851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.343861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.344020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.344053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.344303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.344336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.344474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.344509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.344638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.344669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.344931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.344963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.345141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.345152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.345360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.345370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 
00:28:33.798 [2024-11-15 11:46:34.345505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.345516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.345667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.345678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.345816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.345827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.345988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.346021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.346214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.346252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.346446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.346498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.346719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.346753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.347006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.347037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.347269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.347279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 00:28:33.798 [2024-11-15 11:46:34.347418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.347451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.798 qpair failed and we were unable to recover it. 
00:28:33.798 [2024-11-15 11:46:34.347580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.798 [2024-11-15 11:46:34.347616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.347847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.347857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.347993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.348004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.348245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.348277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.348502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.348536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.348823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.348855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.349064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.349097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.349281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.349312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.349586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.349631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.349837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.349849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 
00:28:33.799 [2024-11-15 11:46:34.349995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.350006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.350082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.350092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.350243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.350254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.350331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.350358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.350544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.350577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.350821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.350854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.351052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.351063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.351217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.351228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.351521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.351532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.351629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.351640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 
00:28:33.799 [2024-11-15 11:46:34.351725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.351736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.351913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.351946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.352255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.352288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.352420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.352452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.352649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.352682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.352910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.352944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.353213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.353245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.353530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.353565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.353689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.353700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.353906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.353937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 
00:28:33.799 [2024-11-15 11:46:34.354190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.354222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.354420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.354451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.354592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.354625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.354880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.354913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.355193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.355206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.355304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.355342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.355566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.355600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.355839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.355850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.799 [2024-11-15 11:46:34.356025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.799 [2024-11-15 11:46:34.356035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.799 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.356131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.356163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 
00:28:33.800 [2024-11-15 11:46:34.356361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.356394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.356536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.356568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.356775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.356808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.357065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.357109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.357241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.357252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.357483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.357495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.357583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.357594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.357828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.357838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.357988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.357999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.358244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.358275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 
00:28:33.800 [2024-11-15 11:46:34.358409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.358441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.358586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.358619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.358812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.358843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.358972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.359005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.359201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.359234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.359430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.359468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.359663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.359695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.359886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.359896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.360099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.360110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.360192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.360203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 
00:28:33.800 [2024-11-15 11:46:34.360474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.360507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.360770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.360802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.360926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.360968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.361119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.361129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.361309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.361341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.361546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.361581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.361765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.361796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.361983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.362015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.362218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.362250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.362489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.362522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 
00:28:33.800 [2024-11-15 11:46:34.362717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.362750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.362980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.363013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.363141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.363174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.363311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.363344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.363531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.363570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.800 qpair failed and we were unable to recover it. 00:28:33.800 [2024-11-15 11:46:34.363719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.800 [2024-11-15 11:46:34.363729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.363813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.363823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.363917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.363949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.364233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.364265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.364474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.364507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 
00:28:33.801 [2024-11-15 11:46:34.364764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.364796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.365031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.365042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.365195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.365206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.365276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.365287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.365395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.365426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.365708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.365781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.365921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.365958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.366076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.366087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.366306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.366338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.366529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.366563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 
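Context for the repeated errors above and below (not part of the captured console output): errno 111 on Linux is ECONNREFUSED, meaning the TCP connect to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused, which usually indicates the target host is reachable but nothing is listening on that port at that moment - an interpretation, since the log itself only records the failed connect. Note also that within this stretch the failing qpair handle changes from 0x7f4f34000b90 to 0x7f4f3c000b90 while the address and port stay the same, consistent with the host allocating a fresh qpair and retrying the same endpoint. The following minimal C program is an illustration only (not the test's code; address and port are copied from the log) of how a refused connect surfaces as errno 111:

    /* Illustration: one TCP connect attempt to the address/port seen in the log.
     * With no listener on 10.0.0.2:4420, connect() fails with errno 111
     * (ECONNREFUSED) on Linux, matching the message repeated in this log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
            fprintf(stderr, "bad address\n");
            return 1;
        }

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            /* Expected output here: "connect() failed, errno = 111 (Connection refused)" */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        else
            printf("connected\n");

        close(fd);
        return 0;
    }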
00:28:33.801 [2024-11-15 11:46:34.366700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.366711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.366872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.366882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.367122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.367153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.367277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.367311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.367509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.367541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.367679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.367711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.367989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.367999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.368154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.368164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.368336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.368346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.368424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.368435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 
00:28:33.801 [2024-11-15 11:46:34.368602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.368613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.368764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.368797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.368989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.369020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.369208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.369240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.369517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.369529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.369769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.369803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.369986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.369996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.370113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.370145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.801 [2024-11-15 11:46:34.370360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.801 [2024-11-15 11:46:34.370392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.801 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.370647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.370680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 
00:28:33.802 [2024-11-15 11:46:34.370910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.370942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.371221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.371259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.371328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.371340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.371546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.371557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.371766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.371804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.372003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.372036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.372223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.372255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.372379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.372410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.372618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.372651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.372850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.372883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 
00:28:33.802 [2024-11-15 11:46:34.373161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.373192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.373390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.373422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.373632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.373666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.373926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.373958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.374100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.374132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.374414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.374445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.374677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.374708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.374988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.375019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.375203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.375214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.375475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.375486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 
00:28:33.802 [2024-11-15 11:46:34.375718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.375729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.375866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.375876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.376037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.376046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.376266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.376297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.376491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.376523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.376778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.376809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.377113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.377144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.377425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.377471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.377669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.377701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.377916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.377947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 
00:28:33.802 [2024-11-15 11:46:34.378066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.378076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.378178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.378189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.378394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.378405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.378545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.378556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.378722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.378752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.378934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.378966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.802 [2024-11-15 11:46:34.379148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.802 [2024-11-15 11:46:34.379180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.802 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.379356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.379367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.379532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.379566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.379748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.379779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 
00:28:33.803 [2024-11-15 11:46:34.380085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.380117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.380396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.380428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.380626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.380658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.380843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.380876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.381129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.381167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.381449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.381490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.381721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.381752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.382008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.382040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.382173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.382203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.382384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.382415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 
00:28:33.803 [2024-11-15 11:46:34.382540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.382572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.382775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.382805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.382993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.383025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.383270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.383280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.383518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.383550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.383698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.383731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.384015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.384046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.384208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.384240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.384441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.384480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.384607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.384639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 
00:28:33.803 [2024-11-15 11:46:34.384827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.384858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.385113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.385145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.385334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.385366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.385575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.385608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.385810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.385841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.385987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.386018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.386265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.386275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.386409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.386419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.386557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.386568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.386707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.386717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 
00:28:33.803 [2024-11-15 11:46:34.386870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.386880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.387055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.387066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.387160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.387170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.387308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.387319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.803 [2024-11-15 11:46:34.387397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.803 [2024-11-15 11:46:34.387407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.803 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.387494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.387506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.387692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.387724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.387921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.387952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.388089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.388122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.388267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.388299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 
00:28:33.804 [2024-11-15 11:46:34.388584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.388617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.388812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.388850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.388988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.389000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.389179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.389211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.389357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.389394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.389604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.389637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.389865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.389897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.390096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.390128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.390410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.390442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.390669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.390701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 
00:28:33.804 [2024-11-15 11:46:34.390840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.390851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.390989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.390999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.391212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.391223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.391392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.391403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.391556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.391588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.391773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.391805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.391937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.391970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.392147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.392157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.392240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.392251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.392403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.392413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 
00:28:33.804 [2024-11-15 11:46:34.392551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.392563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.392714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.392724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.392989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.393021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.393165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.393197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.393485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.393519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.393740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.393773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.394073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.394083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.394276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.394307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.394448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.394492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.394723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.394754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 
00:28:33.804 [2024-11-15 11:46:34.394878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.394909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.395117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.395149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.395336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.804 [2024-11-15 11:46:34.395368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.804 qpair failed and we were unable to recover it. 00:28:33.804 [2024-11-15 11:46:34.395598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.395631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.395846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.395877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.395994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.396025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.396218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.396250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.396451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.396493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.396749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.396781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.397056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.397068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 
00:28:33.805 [2024-11-15 11:46:34.397167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.397178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.397407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.397418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.397566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.397577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.397664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.397676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.397825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.397838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.397994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.398027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.398163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.398195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.398452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.398492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.398702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.398734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.398929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.398940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 
00:28:33.805 [2024-11-15 11:46:34.399108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.399140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.399326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.399357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.399657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.399689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.399807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.399817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.399900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.399910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.400148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.400180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.400380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.400412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.400549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.400581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.400871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.400904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.401090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.401121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 
00:28:33.805 [2024-11-15 11:46:34.401400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.401433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.401564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.401596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.401897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.401931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.402148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.402179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.402325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.402356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.402622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.402655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.805 qpair failed and we were unable to recover it. 00:28:33.805 [2024-11-15 11:46:34.402807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.805 [2024-11-15 11:46:34.402840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.403026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.403057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.403226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.403259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.403534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.403566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 
00:28:33.806 [2024-11-15 11:46:34.403755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.403788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.404059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.404132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.404345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.404380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.404668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.404704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.404904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.404915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.405057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.405090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.405347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.405379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.405591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.405624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.405869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.405902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.406105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.406137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 
00:28:33.806 [2024-11-15 11:46:34.406418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.406429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.406583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.406595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.406691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.406702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.406868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.406880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.407044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.407086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.407279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.407312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.407517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.407550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.407689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.407723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.407923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.407934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.408077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.408088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 
00:28:33.806 [2024-11-15 11:46:34.408315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.408348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.408604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.408638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.408864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.408898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.409106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.409139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.409453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.409495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.409720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.409753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.409962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.409973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.410243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.410254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.410438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.410450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.410617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.410628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 
00:28:33.806 [2024-11-15 11:46:34.410788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.410825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.411104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.411137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.411416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.411448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.411706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.806 [2024-11-15 11:46:34.411739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.806 qpair failed and we were unable to recover it. 00:28:33.806 [2024-11-15 11:46:34.411936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.411975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.412132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.412143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.412279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.412290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.412443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.412485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.412798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.412831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.413124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.413156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 
00:28:33.807 [2024-11-15 11:46:34.413378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.413410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.413646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.413686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.413881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.413891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.414104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.414136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.414310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.414343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.414537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.414571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.414711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.414744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.414874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.414906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.415111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.415144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.415398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.415409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 
00:28:33.807 [2024-11-15 11:46:34.415690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.415723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.415950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.415982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.416237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.416249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.416470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.416503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.416689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.416722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.416955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.416989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.417247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.417259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.417540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.417574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.417761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.417794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.418017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.418050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 
00:28:33.807 [2024-11-15 11:46:34.418245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.418257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.418439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.418691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.418704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.419016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.419050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.419335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.419368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.419622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.419657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.419966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.419998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.420237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.420270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.420537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.420572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.420784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.420816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 
00:28:33.807 [2024-11-15 11:46:34.421013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.421025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.807 [2024-11-15 11:46:34.421263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.807 [2024-11-15 11:46:34.421296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.807 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.421508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.421542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.421771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.421805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.422062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.422073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.422291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.422325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.422588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.422624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.422908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.422942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.423158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.423169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.423416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.423449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 
00:28:33.808 [2024-11-15 11:46:34.423750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.423783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.424100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.424140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.424433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.424477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.424676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.424709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.424988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.425000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.425175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.425209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.425489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.425524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.425782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.425815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.426054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.426065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.426226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.426237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 
00:28:33.808 [2024-11-15 11:46:34.426455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.426509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.426749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.426781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.426987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.427020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.427221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.427232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.427321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.427331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.427565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.427600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.427732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.427765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.427954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.427999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.428171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.428182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.428334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.428366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 
00:28:33.808 [2024-11-15 11:46:34.428708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.428743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.429025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.429058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.429336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.429346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.429584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.429609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.429893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.429926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.430183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.430215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.430521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.430556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.430817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.430850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.431055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.431087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 00:28:33.808 [2024-11-15 11:46:34.431353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.808 [2024-11-15 11:46:34.431364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.808 qpair failed and we were unable to recover it. 
00:28:33.808 [2024-11-15 11:46:34.431569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.431581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.431815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.431848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.432047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.432079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.432264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.432296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.432513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.432548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.432825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.432857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.433135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.433168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.433474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.433509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.433793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.433826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.434008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.434041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 
00:28:33.809 [2024-11-15 11:46:34.434294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.434305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.434552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.434593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.434905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.434937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.435241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.435273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.435572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.435607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.435828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.435860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.436183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.436194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.436429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.436441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.436679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.436691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.436870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.436881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 
00:28:33.809 [2024-11-15 11:46:34.437046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.437078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.437336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.437369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.437572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.437606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.437875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.437909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.438170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.438203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.438435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.438447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.438694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.438727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.438986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.439019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.439327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.439360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.439663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.439697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 
00:28:33.809 [2024-11-15 11:46:34.439902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.439934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.440216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.440228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.440446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.440488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.440748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.440781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.441000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.441012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.441103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.441114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.441297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.441328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.441562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.809 [2024-11-15 11:46:34.441596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.809 qpair failed and we were unable to recover it. 00:28:33.809 [2024-11-15 11:46:34.441812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.441844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.442101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.442132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 
00:28:33.810 [2024-11-15 11:46:34.442385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.442419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.442724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.442758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.443044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.443076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.443360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.443392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.443683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.443716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.444000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.444036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.444251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.444262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.444498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.444509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.444687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.444698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.444912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.444943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 
00:28:33.810 [2024-11-15 11:46:34.445142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.445176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.445471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.445510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.445722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.445755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.446029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.446040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.446206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.446217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.446454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.446524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.446811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.446843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.447035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.447068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.447222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.447254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.447543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.447577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 
00:28:33.810 [2024-11-15 11:46:34.447718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.447750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.448033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.448066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.448336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.448347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.448413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.448425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.448663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.448674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.448907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.448918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.449161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.449172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.449493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.449526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.449803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.449835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.450121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.450153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 
00:28:33.810 [2024-11-15 11:46:34.450413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.450445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.810 [2024-11-15 11:46:34.450755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.810 [2024-11-15 11:46:34.450787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.810 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.451075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.451106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.451393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.451426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.451637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.451672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.451962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.451994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.452275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.452306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.452597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.452632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.452918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.452950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.453235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.453267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 
00:28:33.811 [2024-11-15 11:46:34.453581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.453615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.453845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.453878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.454160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.454192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.454477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.454489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.454727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.454738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.454947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.454980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.455244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.455255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.455494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.455527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.455843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.455876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.456097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.456128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 
00:28:33.811 [2024-11-15 11:46:34.456314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.456346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.456612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.456627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.456792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.456823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.457125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.457135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.457236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.457247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.457499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.457532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.457720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.457752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.457940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.457972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.458171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.458202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.458421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.458453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 
00:28:33.811 [2024-11-15 11:46:34.458674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.458709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.458907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.458939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.459194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.459226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.459539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.459573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.459835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.459868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.460163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.460195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.460481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.460515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.460801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.460834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.461083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.811 [2024-11-15 11:46:34.461094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.811 qpair failed and we were unable to recover it. 00:28:33.811 [2024-11-15 11:46:34.461249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.461282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 
00:28:33.812 [2024-11-15 11:46:34.461491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.461524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.461806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.461839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.462119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.462130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.462372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.462383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.462623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.462634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.462795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.462806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.462977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.463009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.463270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.463304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.463595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.463629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.463842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.463875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 
00:28:33.812 [2024-11-15 11:46:34.464126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.464136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.464371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.464382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.464622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.464634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.464748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.464759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.465012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.465043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.465328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.465360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.465562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.465573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.465817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.465850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.466059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.466091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.466345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 
00:28:33.812 [2024-11-15 11:46:34.466690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.466724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.466919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.466958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.467178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.467210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.467396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.467429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.467818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.467894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.468174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.468216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.468514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.468543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.468810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.468845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.469154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.469186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.469450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.469494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 
00:28:33.812 [2024-11-15 11:46:34.469681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.469714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.469907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.469939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.470217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.470250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.470526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.470561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.470850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.470882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.471146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.471178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.471378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.812 [2024-11-15 11:46:34.471411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.812 qpair failed and we were unable to recover it. 00:28:33.812 [2024-11-15 11:46:34.471631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.471665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.471944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.471977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.472123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.472155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 
00:28:33.813 [2024-11-15 11:46:34.472429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.472439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.472610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.472621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.472804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.472814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.472955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.472965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.473103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.473114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.473335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.473368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.473628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.473661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.473922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.473955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.474198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.474210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.474419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.474430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 
00:28:33.813 [2024-11-15 11:46:34.474668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.474680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.474775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.474786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.475052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.475084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.475399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.475432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.475701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.475733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.476022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.476055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.476364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.476395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.476729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.476764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.477051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.477084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.477370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.477401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 
00:28:33.813 [2024-11-15 11:46:34.477746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.477780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.477979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.478019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.478207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.478239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.478486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.478498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.478775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.478807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.479066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.479098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.479333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.479366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.479515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.479548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.479837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.479870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.480079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.480111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 
00:28:33.813 [2024-11-15 11:46:34.480402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.480435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.480721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.480753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.481041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.481074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.481382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.481414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.481712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.813 [2024-11-15 11:46:34.481745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.813 qpair failed and we were unable to recover it. 00:28:33.813 [2024-11-15 11:46:34.482022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.482055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.482328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.482360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.482663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.482697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.482882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.482916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.483200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.483232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 
00:28:33.814 [2024-11-15 11:46:34.483378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.483412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.483709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.483742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.483968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.484000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.484223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.484254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.484467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.484501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.484652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.484684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.484874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.484907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.485213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.485246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.485548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.485590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.485818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.485852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 
00:28:33.814 [2024-11-15 11:46:34.486113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.486124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.486267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.486300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.486535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.486569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.486792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.486824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.487081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.487114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.487336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.487368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.487630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.487642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.487781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.487791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.488021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.488032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.488292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.488303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 
00:28:33.814 [2024-11-15 11:46:34.488542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.488553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.488644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.488662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.488930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.488962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.489200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.489233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.489497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.489530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.489754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.489787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.490072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.490104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.490357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.814 [2024-11-15 11:46:34.490389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.814 qpair failed and we were unable to recover it. 00:28:33.814 [2024-11-15 11:46:34.490599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.490610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.490852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.490884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 
00:28:33.815 [2024-11-15 11:46:34.491198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.491544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.491577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.491838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.491870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.492178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.492210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.492480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.492513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.492742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.492775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.492965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.492998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.493284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.493316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.493515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.493549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.493800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.493832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 
00:28:33.815 [2024-11-15 11:46:34.494102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.494134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.494335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.494369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.494658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.494691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.494980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.495013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.495296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.495328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.495541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.495552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.495800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.495833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.496035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.496066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.496287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.496321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.496523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.496534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 
00:28:33.815 [2024-11-15 11:46:34.496702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.496734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.496961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.496993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.497227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.497260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.497452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.497494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.497786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.497818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.498007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.498040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.498300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.498332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.498642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.498677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.498888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.498920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.499147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.499179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 
00:28:33.815 [2024-11-15 11:46:34.499378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.499411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.499689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.499729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.499948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.499980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.500239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.500272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.500565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.500577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.500857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.815 [2024-11-15 11:46:34.500890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.815 qpair failed and we were unable to recover it. 00:28:33.815 [2024-11-15 11:46:34.501075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.501107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.501393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.501426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.501639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.501651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.501881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.501892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 
00:28:33.816 [2024-11-15 11:46:34.502031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.502041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.502340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.502373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.502507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.502541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.502741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.502774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.503001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.503033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.503181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.503214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.503482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.503515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.503716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.503749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.504016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.504048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.504221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.504232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 
00:28:33.816 [2024-11-15 11:46:34.504351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.504362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.504615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.504645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.504945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.504977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.505171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.505204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.505479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.505514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.505801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.505832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.506034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.506067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.506256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.506287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.506551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.506585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.506871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.506904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 
00:28:33.816 [2024-11-15 11:46:34.507191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.507223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.507514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.507547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.507793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.507826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.508017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.508051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.508306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.508338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.508550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.508584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.508797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.508830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.509114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.509146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.509361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.509394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.509526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.509538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 
00:28:33.816 [2024-11-15 11:46:34.509691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.509703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.509795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.509808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.510070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.510101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.510433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.816 [2024-11-15 11:46:34.510487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.816 qpair failed and we were unable to recover it. 00:28:33.816 [2024-11-15 11:46:34.510702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.510735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.511025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.511057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.511241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.511274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.511548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.511559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.511763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.511795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.512003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.512035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 
00:28:33.817 [2024-11-15 11:46:34.512256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.512267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.512522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.512556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.512842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.512875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.513164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.513196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.513455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.513497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.513731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.513764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.514076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.514109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.514328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.514360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.514630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.514663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.514927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.514959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 
00:28:33.817 [2024-11-15 11:46:34.515274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.515306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.515573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.515584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.515759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.515770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.515997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.516008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.516219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.516230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.516471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.516482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.516630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.516642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.516808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.516839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.517180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.517255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.517575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.517589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 
00:28:33.817 [2024-11-15 11:46:34.517809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.517820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.518038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.518050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.518291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.518302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.518603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.518638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.518872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.518905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.519108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.519140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.519285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.519317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.519550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.519562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.519868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.519900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.520103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.520137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 
00:28:33.817 [2024-11-15 11:46:34.520418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.520452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.520769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.817 [2024-11-15 11:46:34.520812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.817 qpair failed and we were unable to recover it. 00:28:33.817 [2024-11-15 11:46:34.521083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.521116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.521383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.521416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.521708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.521743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.521930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.521962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.522156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.522167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.522409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.522420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.522585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.522597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 00:28:33.818 [2024-11-15 11:46:34.522783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.522794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 
00:28:33.818 [2024-11-15 11:46:34.522972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.818 [2024-11-15 11:46:34.523004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.818 qpair failed and we were unable to recover it. 
00:28:33.818 [... the same error sequence — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 11:46:34.523305 through 11:46:34.580442 ...] 
00:28:33.823 [2024-11-15 11:46:34.580691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.823 [2024-11-15 11:46:34.580703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.823 qpair failed and we were unable to recover it. 00:28:33.823 [2024-11-15 11:46:34.580899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.823 [2024-11-15 11:46:34.580932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.823 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.581218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.581260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.581357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.581369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.581613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.581648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.581855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.581890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.582103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.582135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.582424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.582478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.582722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.582759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.583032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.583065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-11-15 11:46:34.583329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.583362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.583658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.583693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.583921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.583954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.584176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.584209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.584498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.584534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.584798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.584832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.585048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.585082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.585424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.585468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.585683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.585717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.586000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.586034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-11-15 11:46:34.586368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.586402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.586709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.586744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.586996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.587030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.587297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.587331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.587626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.587661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.587894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.587927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.588141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.588174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.588493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.588527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.588817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.588851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.589157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.589190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 
00:28:33.824 [2024-11-15 11:46:34.589456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.589472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.589760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.589771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.589932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.589944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.590199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.590232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.590529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.590563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.590847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.590859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.591121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.591155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.591367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.591400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.824 [2024-11-15 11:46:34.591752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.824 [2024-11-15 11:46:34.591764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.824 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.591868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.591879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-11-15 11:46:34.592088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.592121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.592315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.592349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.592653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.592689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.592903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.592914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.593161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.593193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.593488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.593499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.593676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.593689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.593870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.593903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.594208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.594248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.594585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.594597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-11-15 11:46:34.594908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.594942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.595240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.595274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.595552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.595587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.595798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.595832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.596050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.596084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.596396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.596409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.596673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.596687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.596858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.596870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.596975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.597010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.597286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.597319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-11-15 11:46:34.597542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.597585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.597808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.597820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.598074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.598086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.598418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.598452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.598666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.598700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.598970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.599022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.599307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.599340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.599618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.599631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.599848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.599860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.600104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.600117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 
00:28:33.825 [2024-11-15 11:46:34.600292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.600304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.600382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.600394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.600487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.600500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.600668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.600680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.600826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.600838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.601018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.601030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.825 [2024-11-15 11:46:34.601243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.825 [2024-11-15 11:46:34.601254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.825 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.601467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.601479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.601585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.601596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.601760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.601793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 
00:28:33.826 [2024-11-15 11:46:34.602060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.602093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.602347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.602381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.602654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.602666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.602898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.602931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.603204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.603250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.603486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.603498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.603720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.603754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.603946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.603979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.604196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.604236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.604529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.604563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 
00:28:33.826 [2024-11-15 11:46:34.604776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.604810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.604965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.604998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.605221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.605255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.605534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.605569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.605857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.605868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.606147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.606181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.606498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.606532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.606807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.606843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.607142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.607175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.607515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.607528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 
00:28:33.826 [2024-11-15 11:46:34.607808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.607842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.608121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.608154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.608451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.608493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.608611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.608622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.608764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.608805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.609150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.609183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.609403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.609436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.609715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.609749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.609994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.610027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.610360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.610393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 
00:28:33.826 [2024-11-15 11:46:34.610690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.610724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.610950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.610983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.611126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.611159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.611373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.611406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.611634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.826 [2024-11-15 11:46:34.611669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.826 qpair failed and we were unable to recover it. 00:28:33.826 [2024-11-15 11:46:34.612000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.612033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.612311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.612345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.612527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.612539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.612729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.612763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.613041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.613074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 
00:28:33.827 [2024-11-15 11:46:34.613370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.613404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.613734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.613769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.614048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.614082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.614297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.614329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.614531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.614544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.614734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.614768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.614992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.615026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.615356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.615390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.615739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.615780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 00:28:33.827 [2024-11-15 11:46:34.615943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.827 [2024-11-15 11:46:34.615976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:33.827 qpair failed and we were unable to recover it. 
00:28:33.827 [2024-11-15 11:46:34.616258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:33.827 [2024-11-15 11:46:34.616270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 
00:28:33.827 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:46:34.616 through 11:46:34.674 (console time 00:28:33.827-00:28:34.108) for tqpairs 0x7f4f34000b90, 0x7f4f30000b90 and 0x1922550 ...]
00:28:34.108 [2024-11-15 11:46:34.674556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.674569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.674737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.674774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.674992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.675024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.675325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.675369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.675667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.675703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.675950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.675984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.676278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.676312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.676591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.676626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.676921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.676933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.677183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.677220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 
00:28:34.108 [2024-11-15 11:46:34.677503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.677545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.677833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.677866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.678150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.678183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.678430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.678471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.678788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.678822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.679126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.679160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.679435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.679483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.679678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.679690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.679834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.679846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.680031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.680064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 
00:28:34.108 [2024-11-15 11:46:34.680362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.680396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.680707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.680742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.681044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.681078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.681355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.681387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.681735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.681770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.682088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.682123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.682423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.682455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.682788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.682800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.683061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.683094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.683313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.683345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 
00:28:34.108 [2024-11-15 11:46:34.683637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.683672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.683872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.683906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.684043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.684055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.684310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.684344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.684568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.684603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.684888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.684922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.685222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.685256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.685570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.685605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.685838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.685871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.686195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.686230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 
00:28:34.108 [2024-11-15 11:46:34.686481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.686516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.686650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.686663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.686922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.686956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.687184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.108 [2024-11-15 11:46:34.687224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.108 qpair failed and we were unable to recover it. 00:28:34.108 [2024-11-15 11:46:34.687443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.687488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.687706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.687739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.687931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.687943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.688125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.688159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.688373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.688405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.688714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.688748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 
00:28:34.109 [2024-11-15 11:46:34.689049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.689082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.689364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.689375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.689622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.689634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.689945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.689978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.690274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.690308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.690526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.690562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.690834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.690847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.691024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.691035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.691290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.691324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.691541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.691575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 
00:28:34.109 [2024-11-15 11:46:34.691875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.691907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.692191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.692520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.692556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.692840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.692872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.693167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.693201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.693503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.693540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.693815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.693849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.694159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.694194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.694429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.694471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.694734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.694745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 
00:28:34.109 [2024-11-15 11:46:34.695003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.695036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.695309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.695342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.695646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.695680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.695996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.696030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.696318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.696351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.696643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.696677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.696963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.696995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.697210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.697244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.697449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.697504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.697771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.697783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 
00:28:34.109 [2024-11-15 11:46:34.697976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.697988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.698130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.698142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.698432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.698478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.698803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.698845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.699126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.699137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.699390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.699402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.699698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.699732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.700032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.700065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.700305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.700338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.700634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.700669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 
00:28:34.109 [2024-11-15 11:46:34.700952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.700986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.701284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.701317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.701588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.701622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.701855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.701867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.702121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.702132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.702376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.702408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.702696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.702709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.702885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.702897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.703124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.703157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.703378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.703411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 
00:28:34.109 [2024-11-15 11:46:34.703619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.109 [2024-11-15 11:46:34.703654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.109 qpair failed and we were unable to recover it. 00:28:34.109 [2024-11-15 11:46:34.703855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.703866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.704148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.704181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.704393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.704427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.704731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.704764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.705032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.705043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.705118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.705131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.705389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.705401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.705575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.705587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.705865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.705900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 
00:28:34.110 [2024-11-15 11:46:34.706102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.706136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.706349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.706382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.706682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.706718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.707008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.707021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.707323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.707356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.707659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.707694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.707974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.708007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.708307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.708340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.708605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.708618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.708839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.708851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 
00:28:34.110 [2024-11-15 11:46:34.709103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.709137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.709415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.709449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.709755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.709791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.709947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.709961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.710207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.710219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.710385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.710398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.710587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.710620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.710835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.710868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.711156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.711189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 00:28:34.110 [2024-11-15 11:46:34.711482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.110 [2024-11-15 11:46:34.711516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.110 qpair failed and we were unable to recover it. 
00:28:34.110 [2024-11-15 11:46:34.711790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.110 [2024-11-15 11:46:34.711817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:34.110 qpair failed and we were unable to recover it.
00:28:34.110 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 2024-11-15 11:46:34.711790 and 11:46:34.760006 ...]
00:28:34.114 [2024-11-15 11:46:34.759994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.114 [2024-11-15 11:46:34.760006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:34.114 qpair failed and we were unable to recover it.
00:28:34.114 [2024-11-15 11:46:34.760150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.760161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.760321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.760333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.760513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.760525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.760776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.760788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.760963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.760975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.761161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.761173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.761360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.761371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.761610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.761622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.761878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.761891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.762084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.762096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 
00:28:34.114 [2024-11-15 11:46:34.762248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.762261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.762419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.762431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.762663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.762676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.762945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.762958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.763122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.763134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.763409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.763421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.763640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.763652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.763924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.763935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.764097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.764108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.764349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.764360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 
00:28:34.114 [2024-11-15 11:46:34.764588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.764600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.764784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.764796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.765045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.765057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.765344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.765355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.765573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.765585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.765732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.765743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.765961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.765973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.766283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.766295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.766450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.766467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.766723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.766734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 
00:28:34.114 [2024-11-15 11:46:34.766974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.766986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.767173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.767185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.767423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.767435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.767666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.767678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.767948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.767960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.114 [2024-11-15 11:46:34.768203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.114 [2024-11-15 11:46:34.768215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.114 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.768457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.768472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.768719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.768731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.768880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.768891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.769067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.769078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 
00:28:34.115 [2024-11-15 11:46:34.769256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.769267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.769491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.769503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.769735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.769747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.770021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.770032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.770190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.770201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.770441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.770453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.770682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.770694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.770915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.770926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.771187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.771199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.771416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.771427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 
00:28:34.115 [2024-11-15 11:46:34.771668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.771682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.771971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.771983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.772197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.772209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.772376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.772389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.772611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.772623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.772711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.772723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.772935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.772947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.773109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.773121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.773356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.773367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.773578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.773591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 
00:28:34.115 [2024-11-15 11:46:34.773807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.773819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.774062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.774074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.774158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.774170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.774331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.774342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.774578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.774590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.774850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.774861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.775021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.775033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.775298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.775309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.775551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.775563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.775808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.775820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 
00:28:34.115 [2024-11-15 11:46:34.775980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.775991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.776228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.776239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.776450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.776465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.776676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.776687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.776927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.776938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.777202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.777214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.777424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.777435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.777686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.777698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.777929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.777940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.778203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.778214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 
00:28:34.115 [2024-11-15 11:46:34.778464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.778476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.778716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.778728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.778884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.778895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.779105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.779117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.779358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.779370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.779524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.779536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.779777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.779789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.780056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.780067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.780210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.780221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.780413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.780424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 
00:28:34.115 [2024-11-15 11:46:34.780690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.780704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.780881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.780892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.781135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.115 [2024-11-15 11:46:34.781147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.115 qpair failed and we were unable to recover it. 00:28:34.115 [2024-11-15 11:46:34.781387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.781399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.781651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.781664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.781910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.781921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.782159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.782172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.782417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.782429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.782676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.782687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.782928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.782940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 
00:28:34.116 [2024-11-15 11:46:34.783185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.783196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.783368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.783380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.783468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.783480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.783703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.783715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.783870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.783882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.784090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.784101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.784341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.784351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.784640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.784652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.784866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.784877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.784957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.784968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 
00:28:34.116 [2024-11-15 11:46:34.785196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.785208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.785432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.785443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.785663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.785675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.785835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.785846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.786054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.786065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.786306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.786318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.786541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.786553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.786741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.786752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.786963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.786974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.787148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.787159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 
00:28:34.116 [2024-11-15 11:46:34.787386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.787397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.787595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.787607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.787698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.787709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.787958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.787969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.788233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.788245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.788472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.788484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.788592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.788602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.788843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.788854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.789111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.789122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.789364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.789375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 
00:28:34.116 [2024-11-15 11:46:34.789539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.789553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.789804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.789815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.789955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.789966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.790183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.790194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.790436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.790447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.790681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.790691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.790843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.790855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.791063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.791075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.791227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.791239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.791449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.791464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 
00:28:34.116 [2024-11-15 11:46:34.791674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.791686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.791943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.791955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.792211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.792221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.792465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.792477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.792723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.792735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.792972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.116 [2024-11-15 11:46:34.792983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.116 qpair failed and we were unable to recover it. 00:28:34.116 [2024-11-15 11:46:34.793204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.793214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.793479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.793491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.793651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.793662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.793900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.793911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 
00:28:34.117 [2024-11-15 11:46:34.794066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.794077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.794227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.794239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.794479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.794491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.794729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.794740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.794900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.794911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.795163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.795174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.795330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.795341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.795600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.795612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.795851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.795862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.796035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.796046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 
00:28:34.117 [2024-11-15 11:46:34.796284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.796295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.796413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.796424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.796565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.796577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.796787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.796799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.796951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.796963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.797147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.797158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.797369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.797381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.797621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.797634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.797889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.797900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.798136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.798148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 
00:28:34.117 [2024-11-15 11:46:34.798236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.798248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.798443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.798454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.798697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.798708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.798890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.798901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.799056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.799066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.799272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.799283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.799523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.799534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.799686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.799697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.799874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.799885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.800094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.800106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 
00:28:34.117 [2024-11-15 11:46:34.800370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.800381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.800466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.800478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.800634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.800645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.800786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.800799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.800968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.800979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.801218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.801229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.801466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.801477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.801698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.801710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.801946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.801957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.802111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.802122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 
00:28:34.117 [2024-11-15 11:46:34.802342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.802352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.802529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.802540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.802765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.802776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.803010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.803021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.803208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.803219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.803376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.803387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.803597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.803609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.803767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.803778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.803876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.803887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.804151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.804162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 
00:28:34.117 [2024-11-15 11:46:34.804353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.804364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.804546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.804558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.804769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.117 [2024-11-15 11:46:34.804781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.117 qpair failed and we were unable to recover it. 00:28:34.117 [2024-11-15 11:46:34.804940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.804951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.805112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.805124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.805272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.805285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.805512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.805525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.805609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.805621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.805791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.805803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.805982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.805994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 
00:28:34.118 [2024-11-15 11:46:34.806180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.806194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.806345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.806368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.806538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.806550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.806804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.806816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.806899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.806911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.807163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.807174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.807329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.807340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.807580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.807592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.807743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.807754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.807967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.807978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 
00:28:34.118 [2024-11-15 11:46:34.808147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.808159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.808388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.808399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.808539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.808552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.808690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.808701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.808913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.808925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.809158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.809170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.809377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.809389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.809529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.809542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.809707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.809718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.809822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.809834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 
00:28:34.118 [2024-11-15 11:46:34.809987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.809999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.810161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.810172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.810418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.810430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.810688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.810700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.810941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.810953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.811112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.811124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.811199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.811210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.811493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.811523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.811627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.811641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.811837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.811849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 
00:28:34.118 [2024-11-15 11:46:34.812065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.812077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.812311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.812323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.812501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.812513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.812724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.812736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.812945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.812956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.813127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.813138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.813350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.813362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.813639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.813651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.813790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.813802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.814012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.814024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 
00:28:34.118 [2024-11-15 11:46:34.814172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.814194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.814415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.814428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.814650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.814662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.814834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.814846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.815006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.815018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.815183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.815194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.815456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.815474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.815579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.815591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.815820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.815832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 00:28:34.118 [2024-11-15 11:46:34.816087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.118 [2024-11-15 11:46:34.816099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.118 qpair failed and we were unable to recover it. 
00:28:34.118 [2024-11-15 11:46:34.816348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.816359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.816567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.816580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.816813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.816825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.816941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.816953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.817136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.817149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.817356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.817368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.817534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.817546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.817696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.817708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.817894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.817906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.818015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.818026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 
00:28:34.119 [2024-11-15 11:46:34.818233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.818244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.818398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.818410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.818550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.818563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.818783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.818794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.819042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.819053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.819309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.819321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.819559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.819571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.819654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.819666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.819753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.819765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.820002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 
00:28:34.119 [2024-11-15 11:46:34.820197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.820368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.820537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.820688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.820857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.820982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.820994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.821154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.821166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.821256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.821269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.821446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.821462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.821702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.821715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 
00:28:34.119 [2024-11-15 11:46:34.821968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.821982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.822132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.822145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.822305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.822317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.822555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.822567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.822774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.822786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.823019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.823030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.823285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.823296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.823438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.823451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.823711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.823723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 00:28:34.119 [2024-11-15 11:46:34.823960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.119 [2024-11-15 11:46:34.823972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.119 qpair failed and we were unable to recover it. 
00:28:34.119 [2024-11-15 11:46:34.824073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.119 [2024-11-15 11:46:34.824084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:34.119 qpair failed and we were unable to recover it.
00:28:34.119 [2024-11-15 11:46:34.824239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.119 [2024-11-15 11:46:34.824250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:34.119 qpair failed and we were unable to recover it.
00:28:34.119 [the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously for tqpair=0x7f4f3c000b90 from 11:46:34.824 to 11:46:34.835, and then for tqpair=0x7f4f34000b90 from 11:46:34.835 to 11:46:34.866]
00:28:34.122 [2024-11-15 11:46:34.866146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.122 [2024-11-15 11:46:34.866159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:34.122 qpair failed and we were unable to recover it.
00:28:34.122 [2024-11-15 11:46:34.866295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.866306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.866456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.866471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.866678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.866690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.866925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.866937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.867143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.867155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.867425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.867436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.867517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.867529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.867690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.867704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.867910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.867921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.868201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.868212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 
00:28:34.122 [2024-11-15 11:46:34.868353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.868364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.868585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.868597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.868832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.868843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.868934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.868946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.869181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.869193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.869405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.869417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.869649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.869661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.869809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.869821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.870042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.870055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 00:28:34.122 [2024-11-15 11:46:34.870214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.870226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.122 qpair failed and we were unable to recover it. 
00:28:34.122 [2024-11-15 11:46:34.870320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.122 [2024-11-15 11:46:34.870331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.870511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.870523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.870612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.870623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.870781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.870793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.871030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.871041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.871283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.871295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.871530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.871541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.871776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.871788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.872036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.872048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.872232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.872243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.872448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.872463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.872671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.872683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.872918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.872930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.873166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.873177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.873357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.873368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.873525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.873537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.873676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.873687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.873922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.873934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.874161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.874173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.874277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.874288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.874527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.874539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.874744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.874756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.874983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.874995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.875147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.875159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.875403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.875414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.875571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.875583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.875815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.875827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.876041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.876054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.876245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.876256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.876414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.876425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.876659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.876671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.876806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.876817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.876968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.876981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.877189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.877200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.877294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.877306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.877454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.877470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.877680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.877691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.877851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.877862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.878091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.878102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.878360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.878372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.878469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.878481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.878635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.878647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.878899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.878910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.879001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.879012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.879273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.879285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.879533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.879545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.879629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.879640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.879847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.879858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.880118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.880130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.880390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.880401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.880588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.880600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.880820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.880831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.880969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.880980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.881220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.881231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.881382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.881393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.881553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.881564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.881800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.881812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.882047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.882059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.882312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.882324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.882470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.882482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.882661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.882672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.882837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.882847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.883073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.883084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.883289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.883299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.883532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.883542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.883718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.883729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.883965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.883975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.884126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.884139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.884345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.884356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.884454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.884468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 
00:28:34.123 [2024-11-15 11:46:34.884674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.884684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.884825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.884836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.123 qpair failed and we were unable to recover it. 00:28:34.123 [2024-11-15 11:46:34.885069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.123 [2024-11-15 11:46:34.885079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.885334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.885345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.885614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.885626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.885851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.885862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.886121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.886132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.886310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.886321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.886595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.886606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.886854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.886864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.886959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.886970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.887221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.887232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.887483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.887494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.887652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.887663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.887798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.887809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.887957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.887968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.888123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.888134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.888318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.888328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.888532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.888543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.888763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.888774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.888981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.888992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.889224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.889236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.889441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.889453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.889675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.889686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.889868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.889879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.890096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.890108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.890367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.890378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.890621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.890632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.890849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.890859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.891089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.891100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.891333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.891344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.891508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.891520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.891670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.891681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.891862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.891873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.892021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.892032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.892307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.892318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.892532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.892543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.892777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.892792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.892945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.892956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.893109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.893120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.893351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.893362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.893512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.893524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.893739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.893750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.893828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.893838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.894101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.894112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.894261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.894272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.894495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.894507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.894727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.894738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.894817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.894828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.894961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.894971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.895238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.895249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.895488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.895499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.895708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.895719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.895950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.895961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.896163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.896173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.896379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.896389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.896651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.896663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.896888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.896899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.897049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.897060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.897213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.897225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.897432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.897444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.897587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.897599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.897826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.897837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.898075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.898086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.898297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.898308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.898515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.898527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.898787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.898798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.898936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.898947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.899102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.899113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 00:28:34.124 [2024-11-15 11:46:34.899346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.124 [2024-11-15 11:46:34.899358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.124 qpair failed and we were unable to recover it. 
00:28:34.124 [2024-11-15 11:46:34.899575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.899586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.899772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.899784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.900018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.900030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.900275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.900286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.900529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.900541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.900708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.900720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.900962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.900973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.901208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.901221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.901376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.901387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.901595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.901606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.901843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.901855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.902090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.902102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.902317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.902329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.902479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.902491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.902627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.902638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.902788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.902800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.902940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.902951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.903102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.903113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.903248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.903259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.903467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.903478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.903634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.903645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.903800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.903811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.904041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.904052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.904227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.904238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.904474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.904485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.904704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.904715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.904947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.904957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.905189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.905200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.905376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.905386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.905570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.905582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.905763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.905774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.905977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.905988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.906202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.906213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.906450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.906466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.906737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.906763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.906911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.906922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.907126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.907137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.907292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.907303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.907385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.907396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.907546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.907558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.907702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.907713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.907950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.907961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.908180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.908191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.908452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.908467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.908707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.908718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.908923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.908935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.909032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.909044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.909192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.909209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.909347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.909358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.909546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.909558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.909709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.909721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.909890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.909901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.910064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.910075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.910341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.910353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.910537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.910548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.910647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.910658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.910810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.910820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.911026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.911037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.911187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.911198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.911284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.911296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.911451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.911467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.911639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.911652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.911785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.911797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.912028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.912039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.912135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.912147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.912322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.912333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.912540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.912552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.912692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.912703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.912933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.912946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.913175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.913186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 
00:28:34.125 [2024-11-15 11:46:34.913360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.913371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.913456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.913472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.125 [2024-11-15 11:46:34.913679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.125 [2024-11-15 11:46:34.913690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.125 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.913837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.913848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.914068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.914082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.914229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.914240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.914419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.914430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.914608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.914621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.914759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.914770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.914979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.914991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.915141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.915153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.915383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.915395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.915560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.915573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.915808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.915819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.916055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.916067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.916279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.916291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.916429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.916441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.916709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.916722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.916943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.916955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.917213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.917225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.917433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.917443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.917694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.917705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.917966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.917977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.918134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.918145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.918350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.918360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.918599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.918610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.918771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.918782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.918929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.918940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.919173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.919183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.919422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.919432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.919683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.919695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.919850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.919861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.920008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.920019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.920232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.920243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.920486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.920497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.920762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.920774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.921012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.921023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.921230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.921241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.921470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.921481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.921719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.921730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.921816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.921827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.921992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.922003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.922153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.922164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.922331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.922342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.922576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.922589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.922726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.922737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.922993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.923005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.923241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.923253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.923487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.923499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.923636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.923648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.923876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.923887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.924055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.924066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.924305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.924317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.924552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.924564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.924776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.924787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.924956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.924968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.925189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.925200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.925433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.925446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.925611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.925623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.925818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.925829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.926035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.926046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.926305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.926316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.926572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.926584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.926734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.926745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.926895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.926906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.927143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.927155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.927398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.927409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.927653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.927664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.927876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.927886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.928038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.928049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 
00:28:34.126 [2024-11-15 11:46:34.928288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.928299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.928513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.928525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.928678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.928690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.126 [2024-11-15 11:46:34.928919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.126 [2024-11-15 11:46:34.928930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.126 qpair failed and we were unable to recover it. 00:28:34.127 [2024-11-15 11:46:34.929075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.127 [2024-11-15 11:46:34.929086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.127 qpair failed and we were unable to recover it. 00:28:34.127 [2024-11-15 11:46:34.929296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.127 [2024-11-15 11:46:34.929307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.127 qpair failed and we were unable to recover it. 00:28:34.127 [2024-11-15 11:46:34.929560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.127 [2024-11-15 11:46:34.929571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.127 qpair failed and we were unable to recover it. 00:28:34.127 [2024-11-15 11:46:34.929784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.127 [2024-11-15 11:46:34.929795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.127 qpair failed and we were unable to recover it. 00:28:34.127 [2024-11-15 11:46:34.930070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.127 [2024-11-15 11:46:34.930081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.127 qpair failed and we were unable to recover it. 00:28:34.127 [2024-11-15 11:46:34.930316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.127 [2024-11-15 11:46:34.930327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.127 qpair failed and we were unable to recover it. 
00:28:34.127 [... the same failure sequence repeats continuously through 2024-11-15 11:46:34.971 for tqpair=0x7f4f3c000b90, 0x7f4f34000b90, and 0x7f4f30000b90: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:28:34.411 [2024-11-15 11:46:34.971549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.411 [2024-11-15 11:46:34.971561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.411 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.971822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.971834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.971981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.971993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.972088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.972100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.972311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.972325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.972545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.972557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.972848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.972860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.973010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.973022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.973253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.973265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.973431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.973444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 
00:28:34.412 [2024-11-15 11:46:34.973537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.973549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.973639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.973651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.973887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.973900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.974064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.974076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.974315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.974327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.974512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.974524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.974755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.974767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.974925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.974937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.975159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.975171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.975390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.975401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 
00:28:34.412 [2024-11-15 11:46:34.975561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.975574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.975777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.975789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.976028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.976040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.976264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.976275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.976505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.976516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.976755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.976767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.977031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.977044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.977283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.977295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.977511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.977523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.977671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.977683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 
00:28:34.412 [2024-11-15 11:46:34.977916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.977928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.978145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.978157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.978305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.978317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.978551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.978563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.412 qpair failed and we were unable to recover it. 00:28:34.412 [2024-11-15 11:46:34.978712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-11-15 11:46:34.978723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.978940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.978952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.979190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.979202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.979467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.979480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.979703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.979715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.979922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.979934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 
00:28:34.413 [2024-11-15 11:46:34.980100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.980111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.980281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.980293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.980468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.980481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.980686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.980698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.980851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.980865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.981141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.981154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.981361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.981373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.981514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.981527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.981674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.981685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.981990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.982003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 
00:28:34.413 [2024-11-15 11:46:34.982212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.982223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.982478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.982490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.982759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.982772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.982996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.983009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.983268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.983280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.983366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.983377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.983628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.983640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.983876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.983887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.984134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.984146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.984297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.984308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 
00:28:34.413 [2024-11-15 11:46:34.984564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.984577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.984716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.984728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.984961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.984972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.985109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.985121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.985329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.985342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.985499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.985510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.985749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.985762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.985969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.985980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.986168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.986180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.986345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.986357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 
00:28:34.413 [2024-11-15 11:46:34.986512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.986524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.986685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-11-15 11:46:34.986697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.413 qpair failed and we were unable to recover it. 00:28:34.413 [2024-11-15 11:46:34.986792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.986804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.987009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.987021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.987181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.987193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.987448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.987466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.987644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.987656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.987866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.987878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.988139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.988151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.988224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.988235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 
00:28:34.414 [2024-11-15 11:46:34.988327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.988339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.988592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.988605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.988830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.988841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.988909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.988919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.989096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.989110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.989267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.989279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.989487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.989499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.989663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.989674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.989823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.989834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.990064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.990075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 
00:28:34.414 [2024-11-15 11:46:34.990328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.990339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.990530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.990542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.990698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.990709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.990807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.990818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.991062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.991074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.991320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.991348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.991556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.991568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.991722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.991733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.991964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.991976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.992144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.992156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 
00:28:34.414 [2024-11-15 11:46:34.992236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.992247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.992486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.992498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.992760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.992772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.992921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.992933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.993163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.993174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.993448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.993465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.993675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.993686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.993922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.993934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.994169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.994181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 00:28:34.414 [2024-11-15 11:46:34.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-11-15 11:46:34.994410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.414 qpair failed and we were unable to recover it. 
00:28:34.414 [2024-11-15 11:46:34.994595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.994607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.994851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.994864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.994939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.994951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.995153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.995165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.995251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.995262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.995426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.995438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.995532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.995543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.995783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.995794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.996034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.996044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.996274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.996285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 
00:28:34.415 [2024-11-15 11:46:34.996367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.996378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.996651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.996662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.996842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.996853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.997003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.997013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.997162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.997175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.997360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.997371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.997453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.997468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.997602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.997613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.997846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.997858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.998151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.998161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 
00:28:34.415 [2024-11-15 11:46:34.998419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.998431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.998662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.998674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.998880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.998890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.999063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.999074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.999214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.999225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.999463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.999475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.999667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.999678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.999835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.999846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:34.999951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:34.999961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.000125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.000136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 
00:28:34.415 [2024-11-15 11:46:35.000415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.000426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.000587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.000598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.000802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.000813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.000985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.000997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.001133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.001145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.001384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.001395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.001476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.001487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.001715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.415 [2024-11-15 11:46:35.001726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.415 qpair failed and we were unable to recover it. 00:28:34.415 [2024-11-15 11:46:35.001811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.001822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.002028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.002040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 
00:28:34.416 [2024-11-15 11:46:35.002137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.002148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.002388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.002401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.002659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.002671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.002896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.002907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.003158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.003169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.003411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.003422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.003597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.003608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.003837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.003848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.004102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.004113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.004212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.004223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 
00:28:34.416 [2024-11-15 11:46:35.004461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.004472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.004685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.004716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.005002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.005035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.005334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.005345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.005498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.005511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.005720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.005731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.005965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.005976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.006203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.006214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.006476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.006488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.006695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.006706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 
00:28:34.416 [2024-11-15 11:46:35.006979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.006990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.007144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.007155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.007326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.007337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.007601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.007613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.007874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.007885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.008095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.008106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.008357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.008368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.008514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.008525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.008760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.008772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.008997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.009008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 
00:28:34.416 [2024-11-15 11:46:35.009180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.416 [2024-11-15 11:46:35.009191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.416 qpair failed and we were unable to recover it. 00:28:34.416 [2024-11-15 11:46:35.009448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.009463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.009632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.009643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.009848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.009859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.010003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.010014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.010148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.010160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.010225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.010236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.010441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.010453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.010690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.010702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.010938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.010949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 
00:28:34.417 [2024-11-15 11:46:35.011215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.011226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.011479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.011491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.011704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.011715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.011885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.011896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.012150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.012162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.012397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.012408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.012581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.012592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.012745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.012756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.012970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.012981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.013215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.013226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 
00:28:34.417 [2024-11-15 11:46:35.013488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.013500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.013733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.013744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.013883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.013893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.014064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.014075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.014340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.014353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.014520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.014532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.014778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.014790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.014974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.014985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.015068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.015079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.015234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.015245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 
00:28:34.417 [2024-11-15 11:46:35.015481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.015492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.015727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.015737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.015959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.015970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.016151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.016163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.016236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.016247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.016380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.016391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.016573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.016585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.016753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.016764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.016857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.417 [2024-11-15 11:46:35.016868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.417 qpair failed and we were unable to recover it. 00:28:34.417 [2024-11-15 11:46:35.017092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.017103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 
00:28:34.418 [2024-11-15 11:46:35.017328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.017339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.017543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.017554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.017736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.017747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.017968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.017979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.018117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.018129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.018302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.018313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.018404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.018416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.018620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.018631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.018768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.018779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.018941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.018952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 
00:28:34.418 [2024-11-15 11:46:35.019096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.019106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.019340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.019351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.019577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.019588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.019668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.019678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.019908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.019920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.020014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.020025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.020180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.020191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.020353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.020364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.020631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.020643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.020834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.020846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 
00:28:34.418 [2024-11-15 11:46:35.021028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.021109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.021268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.021438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.021640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.021745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.021940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.021951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.022099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.022110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.022297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.022309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.022449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.022466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 
00:28:34.418 [2024-11-15 11:46:35.022649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.022661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.022817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.022829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.023066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.023079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.023241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.023252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.023408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.023419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.023575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.023587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.023678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.418 [2024-11-15 11:46:35.023689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.418 qpair failed and we were unable to recover it. 00:28:34.418 [2024-11-15 11:46:35.023857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.023869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.024107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.024119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.024274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.024286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 
00:28:34.419 [2024-11-15 11:46:35.024535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.024547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.024742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.024753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.024982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.024993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.025292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.025305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.025400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.025411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.025614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.025626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.025800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.025811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.025965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.025976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.026218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.026230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.026384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.026396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 
00:28:34.419 [2024-11-15 11:46:35.026586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.026598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.026622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1930530 (9): Bad file descriptor 00:28:34.419 [2024-11-15 11:46:35.026847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.026861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.027002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.027013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.027180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.027191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.027451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.027466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.027607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.027618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.027774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.027784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.027900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.027912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.028148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.028159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 
00:28:34.419 [2024-11-15 11:46:35.028296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.028307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.028469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.028481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.028615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.028627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.028777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.028788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.028967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.028978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.029051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.029062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.029163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.029174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.029260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.029272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.029530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.029542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.029744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.029756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 
00:28:34.419 [2024-11-15 11:46:35.030038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.030049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.030205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.030216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.030450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.030465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.030705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.030716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.030899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.030910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.031143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.419 [2024-11-15 11:46:35.031154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.419 qpair failed and we were unable to recover it. 00:28:34.419 [2024-11-15 11:46:35.031392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.420 [2024-11-15 11:46:35.031403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.420 qpair failed and we were unable to recover it. 00:28:34.420 [2024-11-15 11:46:35.031582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.420 [2024-11-15 11:46:35.031593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.420 qpair failed and we were unable to recover it. 00:28:34.420 [2024-11-15 11:46:35.031813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.420 [2024-11-15 11:46:35.031826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.420 qpair failed and we were unable to recover it. 00:28:34.420 [2024-11-15 11:46:35.032046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.420 [2024-11-15 11:46:35.032057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.420 qpair failed and we were unable to recover it. 
00:28:34.420 [2024-11-15 11:46:35.032155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.420 [2024-11-15 11:46:35.032165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:34.420 qpair failed and we were unable to recover it.
00:28:34.420 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 11:46:35.032 through 11:46:35.052 ...]
00:28:34.422 [2024-11-15 11:46:35.053244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.422 [2024-11-15 11:46:35.053257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:34.423 qpair failed and we were unable to recover it.
00:28:34.423 [... the same failure sequence repeats for tqpair=0x7f4f34000b90 from 11:46:35.053 through 11:46:35.070 ...]
00:28:34.425 [2024-11-15 11:46:35.070753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.425 [2024-11-15 11:46:35.070768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:34.425 qpair failed and we were unable to recover it.
00:28:34.425 [... the same failure sequence repeats for tqpair=0x7f4f30000b90 from 11:46:35.070 through 11:46:35.075 ...]
00:28:34.425 [2024-11-15 11:46:35.075552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.425 [2024-11-15 11:46:35.075564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.425 qpair failed and we were unable to recover it. 00:28:34.425 [2024-11-15 11:46:35.075709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.425 [2024-11-15 11:46:35.075719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.425 qpair failed and we were unable to recover it. 00:28:34.425 [2024-11-15 11:46:35.075967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.425 [2024-11-15 11:46:35.075978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.425 qpair failed and we were unable to recover it. 00:28:34.425 [2024-11-15 11:46:35.076189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.425 [2024-11-15 11:46:35.076200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.425 qpair failed and we were unable to recover it. 00:28:34.425 [2024-11-15 11:46:35.076353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.425 [2024-11-15 11:46:35.076364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.425 qpair failed and we were unable to recover it. 00:28:34.425 [2024-11-15 11:46:35.076571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.076583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.076815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.076827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.076977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.076988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.077154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.077166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.077388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.077398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 
00:28:34.426 [2024-11-15 11:46:35.077615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.077626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.077760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.077771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.078006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.078017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.078279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.078291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.078520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.078532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.078778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.078791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.078954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.078965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.079197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.079208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.079296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.079307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.079583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.079595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 
00:28:34.426 [2024-11-15 11:46:35.079774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.079785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.080010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.080020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.080227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.080238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.080472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.080483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.080631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.080642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.080828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.080839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.080991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.081003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.081178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.081188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.081398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.081411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.081561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.081572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 
00:28:34.426 [2024-11-15 11:46:35.081805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.081816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.082010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.082021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.082160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.082171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.082382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.082392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.082546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.082557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.082790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.082801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.083021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.083031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.083291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.083302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.083447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.083468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.083703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.083714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 
00:28:34.426 [2024-11-15 11:46:35.083950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.083961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.084219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.084230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.426 [2024-11-15 11:46:35.084376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.426 [2024-11-15 11:46:35.084388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.426 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.084469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.084481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.084653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.084664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.084822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.084833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.085032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.085043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.085228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.085238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.085422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.085433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.085675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.085687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 
00:28:34.427 [2024-11-15 11:46:35.085928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.085940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.086094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.086105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.086185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.086196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.086344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.086355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.086498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.086509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.086577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.086589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.086764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.086774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.087005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.087016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.087250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.087261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.087516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.087527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 
00:28:34.427 [2024-11-15 11:46:35.087732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.087743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.087915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.087925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.088155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.088166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.088338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.088348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.088523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.088534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.088709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.088720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.088882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.088892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.089148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.089160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.089393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.089406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.089579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.089590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 
00:28:34.427 [2024-11-15 11:46:35.089784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.089794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.090029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.090040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.090219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.090229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.090362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.090373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.090606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.090617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.090783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.090793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.091011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.091021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.091244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.091254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.091409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.091420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 00:28:34.427 [2024-11-15 11:46:35.091645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.427 [2024-11-15 11:46:35.091657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.427 qpair failed and we were unable to recover it. 
00:28:34.427 [2024-11-15 11:46:35.091806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.091817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.092004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.092015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.092184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.092196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.092430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.092441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.092592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.092603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.092754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.092765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.092997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.093007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.093227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.093238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.093496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.093508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.093660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.093671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 
00:28:34.428 [2024-11-15 11:46:35.093822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.093834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.093908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.093920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.094057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.094069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.094296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.094307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.094544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.094555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.094810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.094822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.094916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.094927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.095170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.095180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.095414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.095425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.095686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.095696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 
00:28:34.428 [2024-11-15 11:46:35.095858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.095869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.096008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.096019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.096227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.096238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.096303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.096314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.096476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.096487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.096648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.096659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.096847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.096858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.097097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.097108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.097284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.097295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.097535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.097547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 
00:28:34.428 [2024-11-15 11:46:35.097787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.097799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.097951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.097963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.098198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.098209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.098357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.098368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.098606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.098618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.098757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.098769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.099020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.099031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.428 [2024-11-15 11:46:35.099265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.428 [2024-11-15 11:46:35.099278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.428 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.099444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.099456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.099685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.099696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 
00:28:34.429 [2024-11-15 11:46:35.099951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.099962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.100102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.100113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.100292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.100303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.100579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.100590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.100764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.100775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.100913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.100924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.101060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.101072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.101303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.101314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.101498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.101510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 00:28:34.429 [2024-11-15 11:46:35.101594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.429 [2024-11-15 11:46:35.101605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.429 qpair failed and we were unable to recover it. 
00:28:34.429 [2024-11-15 11:46:35.101811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.429 [2024-11-15 11:46:35.101822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:34.429 qpair failed and we were unable to recover it.
[... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f4f3c000b90 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 11:46:35.101811 through 11:46:35.144889, each ending with "qpair failed and we were unable to recover it." ...]
00:28:34.435 [2024-11-15 11:46:35.144879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.435 [2024-11-15 11:46:35.144889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:34.435 qpair failed and we were unable to recover it.
00:28:34.435 [2024-11-15 11:46:35.145029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.145041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.145187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.145197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.145431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.145443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.145600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.145612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.145693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.145704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.145910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.145921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.146176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.146188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.146370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.146382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.146567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.146579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.146736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.146747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 
00:28:34.435 [2024-11-15 11:46:35.146931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.146942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.147102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.147113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.147360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.147371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.147614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.147625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.147861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.147873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.148031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.148042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.148252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.148264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.148468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.148479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.148633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.148643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 00:28:34.435 [2024-11-15 11:46:35.148711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.435 [2024-11-15 11:46:35.148722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.435 qpair failed and we were unable to recover it. 
00:28:34.436 [2024-11-15 11:46:35.148862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.148873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.149093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.149103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.149318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.149329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.149543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.149556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.149790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.149802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.149947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.149959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.150184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.150196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.150438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.150450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.150705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.150717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.150861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.150871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 
00:28:34.436 [2024-11-15 11:46:35.151100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.151111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.151339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.151349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.151613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.151624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.151712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.151723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.151881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.151892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.152101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.152115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.152350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.152361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.152524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.152535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.152767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.152778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.152987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.152999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 
00:28:34.436 [2024-11-15 11:46:35.153159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.153170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.153441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.153451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.153717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.153728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.153961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.153972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.154203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.154215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.154441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.154452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.154686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.154698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.154843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.154854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.155025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.155036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.155256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.155268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 
00:28:34.436 [2024-11-15 11:46:35.155504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.155517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.155669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.155679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.155900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.155910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.156058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.156070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.156315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.156325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.156481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.436 [2024-11-15 11:46:35.156493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.436 qpair failed and we were unable to recover it. 00:28:34.436 [2024-11-15 11:46:35.156658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.156670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.156809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.156820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.157063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.157074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.157312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.157325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 
00:28:34.437 [2024-11-15 11:46:35.157564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.157576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.157814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.157826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.158081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.158098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.158236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.158247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.158436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.158448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.158695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.158708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.158893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.158905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.159137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.159149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.159383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.159395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.159648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.159660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 
00:28:34.437 [2024-11-15 11:46:35.159741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.159753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.160011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.160023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.160256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.160267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.160476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.160488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.160644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.160656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.160893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.160906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.161181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.161193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.161408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.161420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.161657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.161669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.161825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.161836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 
00:28:34.437 [2024-11-15 11:46:35.162014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.162025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.162171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.162183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.162323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.162334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.162573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.162585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.162735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.162746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.162882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.162893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.163034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.163046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.163218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.163230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.163482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.163494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.163666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.163678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 
00:28:34.437 [2024-11-15 11:46:35.163915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.163926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.164083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.164095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.164188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.164200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.164433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.437 [2024-11-15 11:46:35.164444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.437 qpair failed and we were unable to recover it. 00:28:34.437 [2024-11-15 11:46:35.164687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.164698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.164852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.164862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.165011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.165022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.165229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.165240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.165417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.165427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.165561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.165573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 
00:28:34.438 [2024-11-15 11:46:35.165795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.165807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.165948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.165959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.166230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.166261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.166419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.166434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.166722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.166734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.166891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.166903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.167158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.167170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.167308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.167319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.167552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.167564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.167823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.167835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 
00:28:34.438 [2024-11-15 11:46:35.168006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.168018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.168301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.168312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.168550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.168561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.168801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.168812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.168971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.168983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.169190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.169207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.169279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.169291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.169376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.169387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.169611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.169623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.169794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.169806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 
00:28:34.438 [2024-11-15 11:46:35.169950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.169961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.170197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.170208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.170291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.170303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.170516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.170528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.170675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.170687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.170771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.170782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.170990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.171002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.171151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.171162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.171310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.171322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.171529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.171541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 
00:28:34.438 [2024-11-15 11:46:35.171692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.171704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.171950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.438 [2024-11-15 11:46:35.171962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.438 qpair failed and we were unable to recover it. 00:28:34.438 [2024-11-15 11:46:35.172110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.172121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.172341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.172353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.172489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.172502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.172665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.172677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.172831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.172842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.173119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.173131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.173361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.173373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.173581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.173593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-11-15 11:46:35.173803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.173815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.173997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.174009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.174218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.174230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.174379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.174390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.174661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.174671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.174906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.174917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.175145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.175156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.175382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.175394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.175627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.175639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.175817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.175828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-11-15 11:46:35.176035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.176047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.176279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.176290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.176528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.176540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.176704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.176717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.176944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.176955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.177165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.177179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.177391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.177402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.177638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.177649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.177803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.177814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.177965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.177977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-11-15 11:46:35.178066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.178077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.178231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.178244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.178488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.178624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.178635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.178802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.178814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.439 qpair failed and we were unable to recover it. 00:28:34.439 [2024-11-15 11:46:35.179050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.439 [2024-11-15 11:46:35.179062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.179299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.179310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.179492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.179504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.179756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.179767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.180007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.180017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.440 [2024-11-15 11:46:35.180199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.180211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.180391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.180401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.180611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.180623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.180853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.180864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.180936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.180946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.181100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.181275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.181285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.181379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.181390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.181541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.181551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.181782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.181792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.440 [2024-11-15 11:46:35.181877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.181887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.182033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.182043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.182288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.182302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.182450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.182465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.182673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.182683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.182916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.182926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.183002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.183013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.183223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.183234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.183333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.183344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.183508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.183520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.440 [2024-11-15 11:46:35.183752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.183764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.183995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.184006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.184231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.184243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.184506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.184518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.184656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.184668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.184901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.184914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.185146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.185157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.185416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.185427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.185566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.185578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.185785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.185800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.440 [2024-11-15 11:46:35.186039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.186050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.186220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.186231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.186390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.186402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.440 qpair failed and we were unable to recover it. 00:28:34.440 [2024-11-15 11:46:35.186609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.440 [2024-11-15 11:46:35.186621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.186788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.186799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.186951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.186961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.187111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.187122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.187286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.187296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.187439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.187450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.187684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.187696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
00:28:34.441 [2024-11-15 11:46:35.187841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.187852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.188039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.188050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.188278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.188289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.188445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.188456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.188690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.188702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.188912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.188923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.189167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.189197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.189389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.189421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.189694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.189727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.190019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.190030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
00:28:34.441 [2024-11-15 11:46:35.190330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.190363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.190563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.190596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.190879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.190916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.191157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.191190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.191444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.191495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.191805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.191837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.192139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.192150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.192367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.192378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.192608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.192619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.192777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.192788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
00:28:34.441 [2024-11-15 11:46:35.192892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.192903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.193057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.193068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.193300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.193310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.193475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.193508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.193783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.193815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.194015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.194047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.194368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.194401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.194695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.194729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.195002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.195033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.195242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.195275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
00:28:34.441 [2024-11-15 11:46:35.195481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.195514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-11-15 11:46:35.195769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-11-15 11:46:35.195801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.196059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.196090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.196233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.196244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.196409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.196440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.196676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.196709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.196897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.196929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.197105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.197116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.197333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.197366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.197701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.197735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-11-15 11:46:35.197887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.197898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.198058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.198090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.198342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.198376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.198702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.198735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.198937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.198978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.199142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.199153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.199402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.199413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.199582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.199592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.199776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.199808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.200017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.200048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-11-15 11:46:35.200249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.200283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.200598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.200630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.200764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.200803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.201000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.201031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.201342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.201374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.201638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.201671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.201818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.201829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.202027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.202058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.202191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.202223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.202470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.202504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-11-15 11:46:35.202756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.202787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.202980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.203011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.203222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.203233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.203337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.203380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.203524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.203558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.203745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.203778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1407508 Killed "${NVMF_APP[@]}" "$@" 00:28:34.442 [2024-11-15 11:46:35.204069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.204102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.204285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.204296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-11-15 11:46:35.204452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-11-15 11:46:35.204467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:34.443 [2024-11-15 11:46:35.204611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.204623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.204754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.204764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.204920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.204932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:34.443 [2024-11-15 11:46:35.205191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.205202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.443 [2024-11-15 11:46:35.205406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.205416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.205508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.205520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.443 [2024-11-15 11:46:35.205627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.205639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.205825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.205837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:34.443 [2024-11-15 11:46:35.206120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.206131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.206310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.206321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.206488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.206499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.206670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.206682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.206779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.206790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.206961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.206972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.207177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.207188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.207330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.207341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.207494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.207505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.207647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.207658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 
00:28:34.443 [2024-11-15 11:46:35.207811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.207821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.207958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.207969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.208057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.208068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.208295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.208306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.208505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.208516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.208607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.208618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.208766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.208777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.208996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.209190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.209407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 
00:28:34.443 [2024-11-15 11:46:35.209488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.209706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.209816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.209931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.209942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.210175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.210186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.210357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.210368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.210474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.210486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.210717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-11-15 11:46:35.210728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-11-15 11:46:35.210824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.210834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.210993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
00:28:34.444 [2024-11-15 11:46:35.211092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.211351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.211583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.211755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.211845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.211953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.211964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.212046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.212057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.212296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.212308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.212512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.212524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
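Every failure in the stretch above is the same pair of records: posix_sock_create reports connect() failing with errno = 111, which is ECONNREFUSED on Linux, and nvme_tcp_qpair_connect_sock then gives up on the qpair to 10.0.0.2, port 4420. That is consistent with nothing accepting TCP connections on that address and port at this point in the run. A quick, hypothetical way to observe the same condition from a shell on the host (it is not part of the test itself, and it assumes a bash built with /dev/tcp network redirections) would be:

  # Probe 10.0.0.2:4420 once; a refused connection matches the errno = 111 records above.
  if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "listener on 10.0.0.2:4420 accepted the connection"
  else
      echo "connect failed -- consistent with ECONNREFUSED (errno 111)"
  fi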
00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1408311 00:28:34.444 [2024-11-15 11:46:35.212679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.212691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.212932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.212944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1408311 00:28:34.444 [2024-11-15 11:46:35.213036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.213048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:34.444 [2024-11-15 11:46:35.213300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.213312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1408311 ']' 00:28:34.444 [2024-11-15 11:46:35.213469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.213481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.213618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.213631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.444 [2024-11-15 11:46:35.213772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.213783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:34.444 [2024-11-15 11:46:35.214004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.214015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.444 [2024-11-15 11:46:35.214230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.214242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.214404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.214416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:34.444 [2024-11-15 11:46:35.214660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.214673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.444 [2024-11-15 11:46:35.214808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.214821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.215028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.215039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.215322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.215334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.215484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.215495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
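The xtrace records interleaved above show tc2 launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace (nvmfpid=1408311) and then blocking in waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100 while the connection errors keep streaming. As a rough, hypothetical sketch of what such a readiness wait amounts to (wait_for_rpc_socket is an invented name, and the real helper in autotest_common.sh may poll the RPC server itself rather than just the socket file):

  # Hypothetical sketch: poll until the target's UNIX-domain RPC socket shows up or the process dies.
  wait_for_rpc_socket() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target process already exited
          [[ -S $rpc_addr ]] && return 0           # RPC socket exists: treat the target as up
          sleep 0.1
      done
      return 1
  }
  # e.g. wait_for_rpc_socket 1408311 /var/tmp/spdk.sock 100 || echo 'target never came up'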
00:28:34.444 [2024-11-15 11:46:35.215713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.215724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-11-15 11:46:35.215887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-11-15 11:46:35.215898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.216910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.216922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.217153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.217164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-11-15 11:46:35.217349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.217359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.217455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.217471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.217676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.217688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.217847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.217858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.218014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.218025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.218310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.218321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.218424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.218435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.218593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.218605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.218756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.218768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.218874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.218884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-11-15 11:46:35.219137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.219148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.219312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.219322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.219485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.219496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.219598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.219609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.219691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.219701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.219946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.219957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.220120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.220132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.220367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.220378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.220596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.220607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.220706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.220717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-11-15 11:46:35.220866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.220877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.221904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.221918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.222098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.222109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-11-15 11:46:35.222187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-11-15 11:46:35.222197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-11-15 11:46:35.222435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.222446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.222622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.222634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.222866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.222877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.223017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.223029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.223246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.223258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.223468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.223480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.223659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.223670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.223823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.223836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.223983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.223995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.224166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.224178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-11-15 11:46:35.224357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.224368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.224568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.224579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.224648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.224659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.224826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.224837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.224929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.224939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.225169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.225180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.225352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.225364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.225620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.225631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.225710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.225721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.225924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.225935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-11-15 11:46:35.226096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.226106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.226255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.226266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.226472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.226484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.226570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.226581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.226805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.226816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.226966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.226978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.227130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.227141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.227294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.227306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.227463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.227474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.227696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.227707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-11-15 11:46:35.227858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.227868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.227953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.227964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.228192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.228203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.228472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.228483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.228652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.228663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.228839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.228850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.228995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.229006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-11-15 11:46:35.229276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-11-15 11:46:35.229286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.229420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.229431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.229668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.229680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-11-15 11:46:35.229832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.229843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.230054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.230065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.230286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.230297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.230569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.230580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.230785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.230796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.230951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.230961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.231122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.231134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.231324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.231337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.231412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.231423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.231686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.231696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-11-15 11:46:35.231768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.231779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.231989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.232000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.232285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.232296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.232517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.232529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.232690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.232701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.232865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.232876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.233064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.233075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.233243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.233254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.233402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.233412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.233581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.233592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-11-15 11:46:35.233738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.233749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.234024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.234035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.234215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.234225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.234491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.234502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.234735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.234746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.234913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.234924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.235101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.235113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.235409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.235420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.235519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.235531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.235698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.235708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-11-15 11:46:35.235885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.235897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.235970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.235980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.236259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.236271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.236461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.236472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.236618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-11-15 11:46:35.236629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-11-15 11:46:35.236884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-11-15 11:46:35.236895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-11-15 11:46:35.237043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-11-15 11:46:35.237053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-11-15 11:46:35.237145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-11-15 11:46:35.237156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-11-15 11:46:35.237231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-11-15 11:46:35.237241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-15 11:46:35.237394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-15 11:46:35.237407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-11-15 11:46:35.237500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-15 11:46:35.237512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-15 11:46:35.237653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-15 11:46:35.237667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-15 11:46:35.237879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-15 11:46:35.237890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-15 11:46:35.238058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-15 11:46:35.238069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-11-15 11:46:35.238296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-11-15 11:46:35.238307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.238516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.238528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.238641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.238652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.238748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.238760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.238995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.239006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.239163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.239173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-15 11:46:35.239379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.239390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.239543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.239554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.239731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.239742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.239947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.239958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.240155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.240166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.240251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.240262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.240499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.240511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.240692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.240703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.240885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.240895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.241077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.241088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-15 11:46:35.241246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.241256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.241408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.241419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.241655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.241667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.241884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.241896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.242124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.242135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.242276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.242287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.242509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.242521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.242667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.242678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.242911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.242921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.243020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.243031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-15 11:46:35.243291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.243302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.243547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.243558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.243710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.243721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.243874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.243885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.244103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.244114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.244253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.244265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.244411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.244422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.244571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.244583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.244734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.244745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.244952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.244963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-11-15 11:46:35.245205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.245216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.245432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.245442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-11-15 11:46:35.245677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-11-15 11:46:35.245688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.245860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.245870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.245938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.245948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.246136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.246147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.246378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.246389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.246642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.246655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.246794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.246806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.247036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.247047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-15 11:46:35.247278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.247288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.247429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.247440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.247776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.247787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.248054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.248065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.248296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.248307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.248512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.248523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.248730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.248741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.248970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.248981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.249068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.249078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.249242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.249253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-15 11:46:35.249490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.249501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.249600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.249610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.249798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.249809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.249904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.249915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.250052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.250246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.250417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.250515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.250617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.250774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-15 11:46:35.250949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.250960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.251166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.251177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.251382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.251393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.251617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.251629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.251803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.251814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.252048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.252059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.252247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.252258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.252473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.252484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.252660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.252671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 00:28:34.734 [2024-11-15 11:46:35.252893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.252904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.734 qpair failed and we were unable to recover it. 
00:28:34.734 [2024-11-15 11:46:35.253051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.734 [2024-11-15 11:46:35.253061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.253231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.253241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.253506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.253517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.253667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.253678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.253857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.253867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.254002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.254011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.254232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.254243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.254450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.254464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.254608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.254619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.254856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.254867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 
00:28:34.735 [2024-11-15 11:46:35.255137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.255149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.255372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.255382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.255614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.255627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.255890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.255901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.256138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.256150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.256316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.256327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.256589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.256601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.256837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.256848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.256928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.256940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.257161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.257172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 
00:28:34.735 [2024-11-15 11:46:35.257349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.257360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.257579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.257591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.257769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.257780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.257946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.257956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.258114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.258124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.258279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.258291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.258425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.258435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.258600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.258611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.258868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.258879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.259130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.259141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 
00:28:34.735 [2024-11-15 11:46:35.259361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.259372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.259637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.259855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.259866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.260033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.260044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.260190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.260200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.260348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.260358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.260449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.260462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.260652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.260664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.735 qpair failed and we were unable to recover it. 00:28:34.735 [2024-11-15 11:46:35.260932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.735 [2024-11-15 11:46:35.260943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.261198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.261209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-15 11:46:35.261358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.261369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.261615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.261626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.261860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.261871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.261953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.261964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.262181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.262191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.262398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.262408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.262552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.262564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.262800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.262813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.262903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.262914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.263156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.263167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-15 11:46:35.263378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.263389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.263561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.263572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.263724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.263734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.263940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.263951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.264100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.264111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.264251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.264262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.264343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.264354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.264518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.264530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.264708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.264720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.264899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.264910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-15 11:46:35.265146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.265158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.265413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.265423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.265521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.265533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.265755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.265766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.265945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.265956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.266160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.266171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.266308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.266319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.266522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.266533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.266736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.266747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.267006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.267017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 
00:28:34.736 [2024-11-15 11:46:35.267228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.267239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.267454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.267468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.267552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.267562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.267775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.267786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.267997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.268008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.736 [2024-11-15 11:46:35.268093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.736 [2024-11-15 11:46:35.268105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.736 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.268161] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:28:34.737 [2024-11-15 11:46:35.268216] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.737 [2024-11-15 11:46:35.268373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.268383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.268529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.268538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.268681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.268690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 
00:28:34.737 [2024-11-15 11:46:35.268919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.268929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.269161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.269172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.269350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.269360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.269516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.269527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.269705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.269716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.270001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.270012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.270183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.270194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.270380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.270407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.270496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.270508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 00:28:34.737 [2024-11-15 11:46:35.270741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.737 [2024-11-15 11:46:35.270753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.737 qpair failed and we were unable to recover it. 
00:28:34.737 [2024-11-15 11:46:35.270930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:34.737 [2024-11-15 11:46:35.270941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 
00:28:34.737 qpair failed and we were unable to recover it. 
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 11:46:35.270930 and 11:46:35.312910; identical repetitions omitted here ...]
00:28:34.743 [2024-11-15 11:46:35.312899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:34.743 [2024-11-15 11:46:35.312910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 
00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-11-15 11:46:35.313056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.313068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.313273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.313285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.313436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.313447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.313585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.313597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.313822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.313833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.313898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.313909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.314124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.314135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.314343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.314354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.314577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.314589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.314842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.314853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-11-15 11:46:35.314943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.314954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.315186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.315197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.315345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.315356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.315605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.315616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.315781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.315793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.315951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.315962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.316224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.316235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.316373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.316384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.316621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.316632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.316789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.316800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-11-15 11:46:35.317036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.317047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.317135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.317145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.317376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.317388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.317609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.317621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.317773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.317784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.318014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.318025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.318248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.318258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.318409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.318422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.318651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.318664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-11-15 11:46:35.318846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.318857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-11-15 11:46:35.318993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-11-15 11:46:35.319004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.319173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.319184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.319347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.319359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.319439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.319450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.319632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.319643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.319831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.319843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.320099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.320110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.320339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.320350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.320586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.320598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.320855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.320867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-11-15 11:46:35.321071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.321082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.321308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.321319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.321527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.321538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.321684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.321695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.321855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.321866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.322015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.322162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.322266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.322413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.322511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-11-15 11:46:35.322673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.322907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.322917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.323142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.323153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.323303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.323314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.323456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.323471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.323676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.323688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.323893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.323904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.323986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.323996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.324152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.324163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.324313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.324324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-11-15 11:46:35.324426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.324437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.324576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.324588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.324754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.324765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.324985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.324997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.325177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.325188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.325422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.325433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.325581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.325592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.325748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.325761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-11-15 11:46:35.325967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-11-15 11:46:35.325978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.326123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.326134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 
00:28:34.745 [2024-11-15 11:46:35.326211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.326221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.326395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.326405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.326611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.326623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.326895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.326906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.327114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.327125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.327361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.327372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.327522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.327533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.327752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.327762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.327899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.327910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.328063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.328074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 
00:28:34.745 [2024-11-15 11:46:35.328294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.328305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.328443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.328454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.328536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.328547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.328751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.328761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.329003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.329014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.329166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.329177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.329418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.329429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.329582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.329593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.329666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.329677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.329908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.329919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 
00:28:34.745 [2024-11-15 11:46:35.330167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.330178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.330345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.330355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.330510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.330521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.330679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.330689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.330927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.330938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.331202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.331213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.331363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.331374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.331576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.331587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.331727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.331738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.331973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.331984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 
00:28:34.745 [2024-11-15 11:46:35.332217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.332228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.332369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.332380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.332572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.332583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.332736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.332747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.332894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.332905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.333048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.333059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-11-15 11:46:35.333264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-11-15 11:46:35.333275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.333531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.333545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.333770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.333780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.333926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.333936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 
00:28:34.746 [2024-11-15 11:46:35.334113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.334124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.334274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.334284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.334430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.334442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.334682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.334694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.334940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.334951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.335192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.335203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.335289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.335300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.335564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.335576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.335733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.335744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.335842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.335852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 
00:28:34.746 [2024-11-15 11:46:35.336058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.336069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.336217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.336229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.336431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.336442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.336649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.336660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.336808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.336819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.336956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.336966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.337139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.337150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.337368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.337379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.337558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.337570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.337800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.337811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 
00:28:34.746 [2024-11-15 11:46:35.337907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.337918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.338121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.338132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.338243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.338254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.338427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.338438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.338662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.338692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.338915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.338929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.339100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.339110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.339269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.339280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.339432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.339443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-11-15 11:46:35.339628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.339639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 
00:28:34.746 [2024-11-15 11:46:35.339859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-11-15 11:46:35.339870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.340075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.340086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.340315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.340325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.340495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.340505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.340752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.340763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.340933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.340944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.341091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.341102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.341238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.341252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.341463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.341475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.341756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.341767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 
00:28:34.747 [2024-11-15 11:46:35.341917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.341929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.341939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.747 [2024-11-15 11:46:35.342157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.342169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.342321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.342331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.342415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.342425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.342572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.342583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.342820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.342831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.342967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.342978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.343153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.343164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.343397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.343407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 
00:28:34.747 [2024-11-15 11:46:35.343663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.343674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.343910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.343922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.344181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.344191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.344398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.344409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.344661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.344672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.344911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.344921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.345080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.345091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.345317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.345329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.345484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.345496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.345715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.345726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 
00:28:34.747 [2024-11-15 11:46:35.345953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.345964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.346117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.346128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.346349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.346360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.346589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.346601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.346756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.346767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.346988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.347000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.347138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.347149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.347309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.347321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.347404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.747 [2024-11-15 11:46:35.347415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.747 qpair failed and we were unable to recover it. 00:28:34.747 [2024-11-15 11:46:35.347622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.347634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 
00:28:34.748 [2024-11-15 11:46:35.347725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.347735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.347965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.347977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.348238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.348250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.348399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.348411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.348578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.348590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.348761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.348773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.349034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.349045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.349280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.349291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.349522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.349534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.349630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.349642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 
00:28:34.748 [2024-11-15 11:46:35.349795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.349807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.350013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.350025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.350193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.350205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.350408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.350420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.350593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.350606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.350767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.350779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.350984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.350996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.351234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.351246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.351406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.351417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.351652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.351664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 
00:28:34.748 [2024-11-15 11:46:35.351769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.351780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.351923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.351936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.352071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.352083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.352313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.352326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.352588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.352601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.352780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.352791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.352938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.352951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.353132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.353143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.353316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.353328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.353537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.353549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 
00:28:34.748 [2024-11-15 11:46:35.353767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.353779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.354065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.354076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.354247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.354258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.354412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.354422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.354573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.354585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.354691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.354702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.354932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.354943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.748 qpair failed and we were unable to recover it. 00:28:34.748 [2024-11-15 11:46:35.355176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.748 [2024-11-15 11:46:35.355187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.355281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.355292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.355363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.355374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 
00:28:34.749 [2024-11-15 11:46:35.355566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.355578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.355660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.355671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.355829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.355840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.355987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.355998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.356252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.356263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.356467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.356478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.356708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.356719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.356952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.356963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.357051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.357067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.357216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.357231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 
00:28:34.749 [2024-11-15 11:46:35.357386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.357397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.357613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.357625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.357874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.357886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.358163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.358173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.358384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.358395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.358574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.358586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.358797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.358807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.358987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.358998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.359144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.359154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.359360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.359370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 
00:28:34.749 [2024-11-15 11:46:35.359522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.359533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.359688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.359704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.359784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.359794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.359950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.359961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.360146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.360157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.360383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.360395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.360578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.360589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.360803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.360815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.361019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.361030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.361263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.361273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 
00:28:34.749 [2024-11-15 11:46:35.361482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.361494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.361652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.361663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.361897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.361909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.362012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.362023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.362166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.749 [2024-11-15 11:46:35.362177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.749 qpair failed and we were unable to recover it. 00:28:34.749 [2024-11-15 11:46:35.362409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.362420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.362598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.362609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.362678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.362689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.362836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.362847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.363073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.363084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 
00:28:34.750 [2024-11-15 11:46:35.363316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.363327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.363479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.363491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.363708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.363719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.363950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.363961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.364169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.364180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.364454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.364469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.364619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.364640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.364809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.364820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.365060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.365073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.365359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.365370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 
00:28:34.750 [2024-11-15 11:46:35.365607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.365619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.365759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.365770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.365998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.366010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.366166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.366177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.366381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.366393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.366546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.366558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.366764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.366775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.367016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.367027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.367286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.367298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.367524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.367535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 
00:28:34.750 [2024-11-15 11:46:35.367772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.367784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.368010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.368021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.368188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.368199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.368403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.368414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.368582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.368593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.368821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.368831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.369074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.369085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.369273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.369284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.369538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.369550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.369709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.369721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 
00:28:34.750 [2024-11-15 11:46:35.369874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.369885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.370122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.370133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.370342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.750 [2024-11-15 11:46:35.370353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.750 qpair failed and we were unable to recover it. 00:28:34.750 [2024-11-15 11:46:35.370585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.370597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.370812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.370823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.370896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.370907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.371149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.371161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.371395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.371405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.371542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.371552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.371789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.371801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 
00:28:34.751 [2024-11-15 11:46:35.372032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.372043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.372192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.372204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.372428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.372439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.372673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.372685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.372913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.372923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.373150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.373161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.373398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.373409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.373692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.373704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.373936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.373949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.374085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.374096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 
00:28:34.751 [2024-11-15 11:46:35.374311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.374322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.374405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.374417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.374681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.374692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.374942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.374954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.375046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.375056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.375142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.375153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.375385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.375396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.375565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.375576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.375780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.375791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.376020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.376032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 
00:28:34.751 [2024-11-15 11:46:35.376270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.376281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.376535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.376547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.376781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.376792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.377027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.377038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.377187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.377198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.377331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.377342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.751 qpair failed and we were unable to recover it. 00:28:34.751 [2024-11-15 11:46:35.377550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.751 [2024-11-15 11:46:35.377562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.377797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.377807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.377893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.377904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.378164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.378175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 
00:28:34.752 [2024-11-15 11:46:35.378429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.378440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.378709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.378720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.378943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.378953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.379107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.379119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.379350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.379363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.379452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.379469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.379613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.379624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.379799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.379811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.379958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.379970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.380252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.380263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 
00:28:34.752 [2024-11-15 11:46:35.380406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.380418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.380599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.380611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.380750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.380763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.380919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.380931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.381082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.381094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.381378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.381391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.381646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.381659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.381825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.381837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.381993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.382007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.382094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.752 [2024-11-15 11:46:35.382119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:34.752 [2024-11-15 11:46:35.382126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.752 [2024-11-15 11:46:35.382133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.752 [2024-11-15 11:46:35.382138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.752 [2024-11-15 11:46:35.382280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.382291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.382497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.382508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.382724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.382735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.382966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.382977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.383134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.383145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.383368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.383379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.383602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.383613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.383770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.383781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 
00:28:34.752 [2024-11-15 11:46:35.383793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:34.752 [2024-11-15 11:46:35.383958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.383969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.383907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:34.752 [2024-11-15 11:46:35.384017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:34.752 [2024-11-15 11:46:35.384124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.384135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 [2024-11-15 11:46:35.384018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.384347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.384358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.384540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.752 [2024-11-15 11:46:35.384551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.752 qpair failed and we were unable to recover it. 00:28:34.752 [2024-11-15 11:46:35.384702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.384713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.384934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.384946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.385096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.385108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.385257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.385268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.385446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.385461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 
00:28:34.753 [2024-11-15 11:46:35.385629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.385641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.385885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.385897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.386040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.386051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.386273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.386285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.386554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.386566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.386802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.386814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.387037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.387048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.387268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.387279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.387517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.387529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.387810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.387821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 
00:28:34.753 [2024-11-15 11:46:35.387903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.387914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.388155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.388165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.388252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.388263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.388522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.388534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.388636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.388647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.388809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.388820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.389005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.389017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.389225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.389237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.389389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.389401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.389662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.389678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 
00:28:34.753 [2024-11-15 11:46:35.389851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.389863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.390019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.390030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.390124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.390135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.390431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.390443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.390683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.390694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.390925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.390937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.391201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.391212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.391369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.391381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.391605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.391618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.391780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.391791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 
00:28:34.753 [2024-11-15 11:46:35.392025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.392037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.392228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.392240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.753 [2024-11-15 11:46:35.392444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.753 [2024-11-15 11:46:35.392464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.753 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.392720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.392733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.392969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.392981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.393220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.393231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.393380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.393392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.393556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.393568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.393816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.393829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.393992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.394004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 
00:28:34.754 [2024-11-15 11:46:35.394218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.394230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.394390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.394403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.394639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.394654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.394860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.394875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.394968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.394980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.395082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.395093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.395229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.395240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.395328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.395340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.395484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.395498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.395650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.395663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 
00:28:34.754 [2024-11-15 11:46:35.395838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.395850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.396094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.396106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.396303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.396315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.396525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.396538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.396698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.396711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.396867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.396879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.397014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.397026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.397129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.397142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.397342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.397356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.397554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.397599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 
00:28:34.754 [2024-11-15 11:46:35.397785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.397802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.397956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.397967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.398188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.398200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.398436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.398447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.398661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.398677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.398860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.398871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.399073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.399083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.399219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.399231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.399441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.399453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.754 [2024-11-15 11:46:35.399561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.399572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 
00:28:34.754 [2024-11-15 11:46:35.399845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.754 [2024-11-15 11:46:35.399857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.754 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.399964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.399976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.400193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.400208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.400300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.400312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.400451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.400467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.400646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.400657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.400739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.400750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.400908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.400919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.401087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.401099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.401189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.401200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 
00:28:34.755 [2024-11-15 11:46:35.401342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.401353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.401447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.401462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.401632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.401643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.401799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.401811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.402000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.402011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.402216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.402227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.402435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.402447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.402604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.402616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.402794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.402805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.402893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.402904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 
00:28:34.755 [2024-11-15 11:46:35.403091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.403283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.403501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.403602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.403719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.403813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.403976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.403987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.404265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.404276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.404474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.404486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 00:28:34.755 [2024-11-15 11:46:35.404680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.755 [2024-11-15 11:46:35.404703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:34.755 qpair failed and we were unable to recover it. 
00:28:34.755 [2024-11-15 11:46:35.404870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.755 [2024-11-15 11:46:35.404883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:34.755 qpair failed and we were unable to recover it.
[identical connect() retry failures (errno = 111) against tqpair=0x7f4f3c000b90 repeat through 2024-11-15 11:46:35.411, each ending "qpair failed and we were unable to recover it."]
00:28:34.756 [2024-11-15 11:46:35.412057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.756 [2024-11-15 11:46:35.412079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420
00:28:34.756 qpair failed and we were unable to recover it.
00:28:34.756 [2024-11-15 11:46:35.412262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.756 [2024-11-15 11:46:35.412278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420
00:28:34.756 qpair failed and we were unable to recover it.
[identical connect() retry failures (errno = 111) against tqpair=0x7f4f34000b90 repeat through 2024-11-15 11:46:35.439, each ending "qpair failed and we were unable to recover it."]
00:28:34.761 [2024-11-15 11:46:35.439244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.439257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.439322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.439333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.439402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.439414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.439570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.439581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.439671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.439683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.439946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.439958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.440060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.440142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.440226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.440497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 
00:28:34.761 [2024-11-15 11:46:35.440657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.440757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.440926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.440938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.441028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.441039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.441192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.441204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.761 [2024-11-15 11:46:35.441344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.761 [2024-11-15 11:46:35.441355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.761 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.441522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.441696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.441707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.441786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.441798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.441967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.441978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 
00:28:34.762 [2024-11-15 11:46:35.442070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.442928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.442940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 
00:28:34.762 [2024-11-15 11:46:35.443356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.443940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.443951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 
00:28:34.762 [2024-11-15 11:46:35.444620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.444971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.444983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 
00:28:34.762 [2024-11-15 11:46:35.445722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.445986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.445997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.446081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.762 [2024-11-15 11:46:35.446092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.762 qpair failed and we were unable to recover it. 00:28:34.762 [2024-11-15 11:46:35.446157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.446315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.446478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.446545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.446704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.446781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 
00:28:34.763 [2024-11-15 11:46:35.446866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.446941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.446952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 
00:28:34.763 [2024-11-15 11:46:35.447916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.447927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.447998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.448950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.448961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.449036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 
00:28:34.763 [2024-11-15 11:46:35.449190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.449406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.449499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.449584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.449675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.449825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.449836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.450045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.450056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.450135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.450146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.450294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.450305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.450382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.450393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 
00:28:34.763 [2024-11-15 11:46:35.450478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.450490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.450648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.763 [2024-11-15 11:46:35.450659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.763 qpair failed and we were unable to recover it. 00:28:34.763 [2024-11-15 11:46:35.450864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.450876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.451021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.451032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.451186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.451197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.451402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.451413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.451627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.451639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.451805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.451816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.451995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.452006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.452091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.452101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 
00:28:34.764 [2024-11-15 11:46:35.452314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.452326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.452590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.452601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.452770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.452780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.452936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.452948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.453041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.453052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.453137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.453148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.453312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.453324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.453532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.453543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.453740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.453751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.453917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.453931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 
00:28:34.764 [2024-11-15 11:46:35.454092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.454244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.454254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.454499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.454510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.454600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.454611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.454747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.454758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.454962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.454973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.455237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.455248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.455504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.455516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.455671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.455682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.455804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.455815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 
00:28:34.764 [2024-11-15 11:46:35.455978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.455990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.456137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.456148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.456368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.456379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.456507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.456518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.456735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.456747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.764 [2024-11-15 11:46:35.456890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.764 [2024-11-15 11:46:35.456900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.764 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.457129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.457140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.457302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.457313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.457548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.457560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.457713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.457724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 
00:28:34.765 [2024-11-15 11:46:35.457868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.457879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.457959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.457970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.458169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.458179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.458333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.458344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.458552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.458563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.458643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.458654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.458862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.458874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.458979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.458990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.459061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.459071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.459219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.459229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 
00:28:34.765 [2024-11-15 11:46:35.459367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.459378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.459618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.459630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.459790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.459801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.460004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.460015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.460254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.460265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.460499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.460511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.460664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.460675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.460883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.460894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.461033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.461044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.461198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.461211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 
00:28:34.765 [2024-11-15 11:46:35.461325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.461336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.461571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.461582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.461792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.461803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.461958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.461969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.462063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.462073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.462235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.462246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.462427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.462438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.462590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.462601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.462704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.462715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.462870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.462881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 
00:28:34.765 [2024-11-15 11:46:35.462988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.463000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.463152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.463163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.463487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.463499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.765 [2024-11-15 11:46:35.463708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.765 [2024-11-15 11:46:35.463719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.765 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.463925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.463936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.464075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.464086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.464174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.464185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.464338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.464349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.464582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.464594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.464732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.464743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 
00:28:34.766 [2024-11-15 11:46:35.464888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.464899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.465159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.465170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.465349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.465360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.465512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.465523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.465675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.465686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.465779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.465790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.466049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.466073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.466258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.466277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.466563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.466574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.466750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.466761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 
00:28:34.766 [2024-11-15 11:46:35.466864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.466875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.467100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.467273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.467418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.467567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.467754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.467913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.467989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.468000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.468235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.468246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.468449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.468466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 
00:28:34.766 [2024-11-15 11:46:35.468715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.468726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.468953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.468964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.469070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.469080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.469338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.469348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.469417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.469427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.469656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.469667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.469760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.469771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.469976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.469987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.470148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.470158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.470245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.470256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 
00:28:34.766 [2024-11-15 11:46:35.470469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.470479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.470556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.766 [2024-11-15 11:46:35.470567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.766 qpair failed and we were unable to recover it. 00:28:34.766 [2024-11-15 11:46:35.470770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.470781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.471023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.471034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.471182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.471193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.471343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.471355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.471571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.471582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.471794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.471805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.472036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.472046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.472279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.472290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 
00:28:34.767 [2024-11-15 11:46:35.472430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.472441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.472642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.472653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.472727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.472737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.472979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.472990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.473131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.473142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.473301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.473311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.473381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.473393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.473624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.473636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.473887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.473897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.473973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.473984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 
00:28:34.767 [2024-11-15 11:46:35.474151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.474162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.474368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.474379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.474469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.474480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.474640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.474651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.474908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.474920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.475228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.475239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.475454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.475468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.475639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.475650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.475805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.475816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.475889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.475903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 
00:28:34.767 [2024-11-15 11:46:35.476140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.476150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.476326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.476337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.476497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.476508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.476657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.476668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.476905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.476915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.477121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.477132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.477227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.477238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.477467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.477478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.477736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.477747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 00:28:34.767 [2024-11-15 11:46:35.477989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.767 [2024-11-15 11:46:35.478000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.767 qpair failed and we were unable to recover it. 
00:28:34.767 [2024-11-15 11:46:35.478136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.478147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.478371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.478382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.478633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.478644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.478851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.478862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.479008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.479018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.479208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.479218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.479357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.479368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.479626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.479637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.479841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.479852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.480133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.480144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 
00:28:34.768 [2024-11-15 11:46:35.480380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.480391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.480662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.480673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.480823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.480833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.481033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.481044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.481308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.481319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.481414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.481425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.481639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.481650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.481883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.481894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 
00:28:34.768 [2024-11-15 11:46:35.482301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.482973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.482983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.483135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.483145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.483353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.483364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.483571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.483582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.483656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.483668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 
00:28:34.768 [2024-11-15 11:46:35.483751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.483761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.483989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.768 [2024-11-15 11:46:35.484754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.768 qpair failed and we were unable to recover it. 00:28:34.768 [2024-11-15 11:46:35.484837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.484849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 
00:28:34.769 [2024-11-15 11:46:35.485158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.485877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.485888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.486025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.486171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.486260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 
00:28:34.769 [2024-11-15 11:46:35.486456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.486556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.486742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.486987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.486997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 
00:28:34.769 [2024-11-15 11:46:35.487832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.487927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.487938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.488076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.488087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.488298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.488309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.488544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.488555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.488716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.488727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.488815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.488825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.488972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.488983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.489142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.489152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.489296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.489308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 
00:28:34.769 [2024-11-15 11:46:35.489469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.489481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.489617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.769 [2024-11-15 11:46:35.489627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.769 qpair failed and we were unable to recover it. 00:28:34.769 [2024-11-15 11:46:35.489710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.489721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.489790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.489801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.489881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.489891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.490059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.490070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.490288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.490298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.490433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.490444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.490658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.490669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.490929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.490940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 
00:28:34.770 [2024-11-15 11:46:35.491026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.491174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.491446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.491601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.491695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.491783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.491869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.491880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.492110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.492121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.492218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.492228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.492416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.492427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 
00:28:34.770 [2024-11-15 11:46:35.492567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.492578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.492727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.492738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:34.770 [2024-11-15 11:46:35.492948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.492959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.493123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:28:34.770 [2024-11-15 11:46:35.493270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.493425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.493511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.770 [2024-11-15 11:46:35.493597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.493675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 
00:28:34.770 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.770 [2024-11-15 11:46:35.493878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.493889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.770 [2024-11-15 11:46:35.494114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.494276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.494418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.494576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.494651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.494756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.494843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.494853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 00:28:34.770 [2024-11-15 11:46:35.495012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.770 [2024-11-15 11:46:35.495025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.770 qpair failed and we were unable to recover it. 
00:28:34.770 [2024-11-15 11:46:35.495092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.495252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.495341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.495494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.495589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.495735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.495842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.495853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.496058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.496069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.496218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.496229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.496308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.496319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 
00:28:34.771 [2024-11-15 11:46:35.496469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.496480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.496630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.496641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.496782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.496792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.497001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.497012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.497187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.497197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.497347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.497357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.497452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.497466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.497674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.497684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.497882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.497893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.498123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 
00:28:34.771 [2024-11-15 11:46:35.498273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.498439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.498561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.498668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.498761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.498924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.498935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.499158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.499186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.499278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.499291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.499525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.499536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.499640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.499651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 
00:28:34.771 [2024-11-15 11:46:35.499933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.499945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.500191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.500203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.500316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.500327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.500477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.500489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.500581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.500592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.500682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.500692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.500826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.500840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.501011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.771 [2024-11-15 11:46:35.501023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.771 qpair failed and we were unable to recover it. 00:28:34.771 [2024-11-15 11:46:35.501103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.501114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.501257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.501270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 
00:28:34.772 [2024-11-15 11:46:35.501337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.501348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.501448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.501462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.501698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.501709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.501851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.501862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 
00:28:34.772 [2024-11-15 11:46:35.502866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.502935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.502946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.503103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.503114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.503302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.503313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.503445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.503456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.503536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.503546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.503734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.503745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.503838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.503850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 
00:28:34.772 [2024-11-15 11:46:35.504407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.504979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.504993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.505163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.505277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.505384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.505476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 
00:28:34.772 [2024-11-15 11:46:35.505657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.505763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.505884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.505895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.506041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.506052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.506139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.506150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.506236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.506247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.772 qpair failed and we were unable to recover it. 00:28:34.772 [2024-11-15 11:46:35.506340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.772 [2024-11-15 11:46:35.506352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.506418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.506429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.506530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.506541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.506684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.506695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 
00:28:34.773 [2024-11-15 11:46:35.506790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.506801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.506880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.506892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.507928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.507939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 
00:28:34.773 [2024-11-15 11:46:35.508157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.508913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.508923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 
00:28:34.773 [2024-11-15 11:46:35.509405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.509894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.509905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.510001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.510012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.510096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.510109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.510245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.510256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.510317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.510328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 00:28:34.773 [2024-11-15 11:46:35.510439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.773 [2024-11-15 11:46:35.510450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.773 qpair failed and we were unable to recover it. 
00:28:34.773 [2024-11-15 11:46:35.510533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.510544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.510703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.510714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.510792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.510803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.510885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.510896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.510963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.510974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 
00:28:34.774 [2024-11-15 11:46:35.511631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.511974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.511985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.512051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.512062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.512240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.512251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.512353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.512365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.512513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.512525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.512669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.512680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.512827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.512837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 
00:28:34.774 [2024-11-15 11:46:35.513001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.513858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.513870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 
00:28:34.774 [2024-11-15 11:46:35.514300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.514833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.514844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.515079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.515092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.515167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.774 [2024-11-15 11:46:35.515177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.774 qpair failed and we were unable to recover it. 00:28:34.774 [2024-11-15 11:46:35.515264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.515274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.515427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.515438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 
00:28:34.775 [2024-11-15 11:46:35.515584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.515595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.515752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.515764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.515828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.515839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.515936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.515946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.516020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.516192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.516355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.516440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.516537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.516726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 
00:28:34.775 [2024-11-15 11:46:35.516882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.516892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.517130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.517276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.517424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.517512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.517661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.517858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.517998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 
00:28:34.775 [2024-11-15 11:46:35.518244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.518975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.518987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.519064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.519134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.519290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 
00:28:34.775 [2024-11-15 11:46:35.519537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.519641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.519744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.519836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.519849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.520000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.520012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.520086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.520098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.775 qpair failed and we were unable to recover it. 00:28:34.775 [2024-11-15 11:46:35.520173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.775 [2024-11-15 11:46:35.520186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.520336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.520414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.520568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 
00:28:34.776 [2024-11-15 11:46:35.520660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.520732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.520804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.520881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.520892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 
00:28:34.776 [2024-11-15 11:46:35.521652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.521976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.521988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.522777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 
00:28:34.776 [2024-11-15 11:46:35.522937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.522947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.523886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.523896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 
00:28:34.776 [2024-11-15 11:46:35.523994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.524005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.524084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.524094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.524229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.524240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.524322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.524333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.776 [2024-11-15 11:46:35.524499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.776 [2024-11-15 11:46:35.524510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.776 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.524601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.524760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.524773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.524851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.524862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.524957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.524968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 
00:28:34.777 [2024-11-15 11:46:35.525141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.525944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.525955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 
00:28:34.777 [2024-11-15 11:46:35.526258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.526857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.526868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 
00:28:34.777 [2024-11-15 11:46:35.527329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.527967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.527978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.528114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.528196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.528445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.528540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.528639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 
00:28:34.777 [2024-11-15 11:46:35.528729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.528808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.528818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.777 [2024-11-15 11:46:35.529078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.777 [2024-11-15 11:46:35.529089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.777 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.529310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.529401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.529568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.529650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.529753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.778 [2024-11-15 11:46:35.529849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 
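Interleaved with the connection errors above, the harness (nvmf/common.sh, line 512 in the trace) installs its cleanup trap. Pulled out as a stand-alone sketch, with process_shm and nvmftestfini being the harness's own helpers (not reproduced here), the pattern is:

  # dump the app's shared-memory stats if possible (failure ignored via "|| :"),
  # then tear the nvmf test environment down on Ctrl-C, SIGTERM, or normal exit
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT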
00:28:34.778 [2024-11-15 11:46:35.529930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.529941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.530092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.530167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.778 [2024-11-15 11:46:35.530383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.530538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.778 [2024-11-15 11:46:35.530615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.530797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 [2024-11-15 11:46:35.530949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.530960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 
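The rpc_cmd line above is the test case setting up its namespace backing store: a 64 MB RAM-backed (malloc) bdev with a 512-byte block size, named Malloc0. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so an equivalent stand-alone invocation (default RPC socket assumed) would be roughly:

  # create a 64 MB malloc bdev with 512-byte blocks, exposed as "Malloc0"
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0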
00:28:34.778 [2024-11-15 11:46:35.531044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.531942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.531953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 
00:28:34.778 [2024-11-15 11:46:35.532175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.532850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.532861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.778 [2024-11-15 11:46:35.533047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.778 [2024-11-15 11:46:35.533059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.778 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 
00:28:34.779 [2024-11-15 11:46:35.533318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.533936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.533947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 
00:28:34.779 [2024-11-15 11:46:35.534509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.534941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.534952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 
00:28:34.779 [2024-11-15 11:46:35.535733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.535955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.779 [2024-11-15 11:46:35.535966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.779 qpair failed and we were unable to recover it. 00:28:34.779 [2024-11-15 11:46:35.536110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 
00:28:34.780 [2024-11-15 11:46:35.536822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.536919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.536993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1922550 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.537758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 
00:28:34.780 [2024-11-15 11:46:35.537905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.537916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.538942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.538952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 
00:28:34.780 [2024-11-15 11:46:35.539034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.539936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.539947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.540027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.540038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.540094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.540104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 
00:28:34.780 [2024-11-15 11:46:35.540246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.780 [2024-11-15 11:46:35.540256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.780 qpair failed and we were unable to recover it. 00:28:34.780 [2024-11-15 11:46:35.540328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.540338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.540415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.540426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.540522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.540534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.540667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.540677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.540747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.540757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.540840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.540851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 
00:28:34.781 [2024-11-15 11:46:35.541446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.541876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.541888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 
00:28:34.781 [2024-11-15 11:46:35.542725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.542924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.542999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 
00:28:34.781 [2024-11-15 11:46:35.543806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.543952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.543964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.544110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.544335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.544487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.544581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.544669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.781 [2024-11-15 11:46:35.544753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.781 qpair failed and we were unable to recover it. 00:28:34.781 [2024-11-15 11:46:35.544893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.544903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 
00:28:34.782 [2024-11-15 11:46:35.545242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.545987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.545997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 
00:28:34.782 [2024-11-15 11:46:35.546524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.546903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.546998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.547239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.547329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.547479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.547571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.547736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 
00:28:34.782 [2024-11-15 11:46:35.547843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.547939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.547950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.548848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.548859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 
00:28:34.782 [2024-11-15 11:46:35.548999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.549009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.549156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.549167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.549228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.549239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.549378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.549389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.549568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.549579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.549655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.782 [2024-11-15 11:46:35.549666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.782 qpair failed and we were unable to recover it. 00:28:34.782 [2024-11-15 11:46:35.549823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.549834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.549972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.549983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 
00:28:34.783 [2024-11-15 11:46:35.550368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.550919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.550997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 
00:28:34.783 [2024-11-15 11:46:35.551421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.551904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.551917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.552057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.552067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.552158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.552169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.552310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.552320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.552399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.783 [2024-11-15 11:46:35.552410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.783 qpair failed and we were unable to recover it. 00:28:34.783 [2024-11-15 11:46:35.552482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.784 [2024-11-15 11:46:35.552493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:34.784 qpair failed and we were unable to recover it. 
00:28:34.784 [2024-11-15 11:46:35.552642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.784 [2024-11-15 11:46:35.552654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420
00:28:34.784 qpair failed and we were unable to recover it.
00:28:34.784 - 00:28:35.055 [2024-11-15 11:46:35.552823 through 11:46:35.571387] The same pair of errors (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420) repeats for every retried connection attempt in this window; each attempt ends with "qpair failed and we were unable to recover it."
00:28:35.055 Malloc0
00:28:35.055 [2024-11-15 11:46:35.571543 through 11:46:35.572175] connect() failed, errno = 111 / sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 repeats for several more attempts; each ends with "qpair failed and we were unable to recover it."
00:28:35.055 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:35.055 [2024-11-15 11:46:35.572389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.055 [2024-11-15 11:46:35.572405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420
00:28:35.055 qpair failed and we were unable to recover it.
00:28:35.055 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:35.055 [2024-11-15 11:46:35.572638 through 11:46:35.572754] The same connect()/sock connection errors continue against tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."
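For context (not part of the captured log): the rpc_cmd step above re-creates the NVMe-oF TCP transport on the target through SPDK's JSON-RPC interface, while the surrounding connection-refused errors are presumably the condition this disconnect test exercises. A minimal sketch of the equivalent standalone invocation, assuming an SPDK checkout and a target listening on the default RPC socket; the extra options carried by "-o" are not visible in this log, so only the transport type is shown:

  # from the SPDK source tree, against the running target's RPC socket
  ./scripts/rpc.py nvmf_create_transport -t TCP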
00:28:35.055 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:35.055 [2024-11-15 11:46:35.572947 through 11:46:35.573109] connect() failed, errno = 111 / sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."
00:28:35.055 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:35.055 [2024-11-15 11:46:35.573185 through 11:46:35.574257] The same connect()/sock connection errors continue against tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."
00:28:35.055 [2024-11-15 11:46:35.574398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.574410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.574489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.574499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.574710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.574721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.574869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.574879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.574949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.574959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.575107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.575118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.575274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.575285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.575366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.575376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.575511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.575522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 00:28:35.055 [2024-11-15 11:46:35.575598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.575610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.055 qpair failed and we were unable to recover it. 
00:28:35.055 [2024-11-15 11:46:35.575744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.055 [2024-11-15 11:46:35.575754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.575848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.575860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.576952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.576963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 
00:28:35.056 [2024-11-15 11:46:35.577167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.577178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f3c000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.577353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.577365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.577517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.577528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.577626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.577637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.577778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.577789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.577873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.577884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.578036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.578046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.578180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.578190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.578332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.578343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.578520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.578530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 
00:28:35.056 [2024-11-15 11:46:35.578694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.578705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.578886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.578897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.578971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.056 [2024-11-15 11:46:35.579033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.579128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f30000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.579314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.579427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.579612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.579697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.579787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 
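Interleaved with the host retries above, the target process logs nvmf_tcp_create: "*** TCP Transport Init ***", i.e. the NVMe-oF TCP transport is only being created at this point, which is consistent with the host still seeing refused connections. A hedged guess at the RPC step that produces that notice (socket path and options are the SPDK defaults, not taken from this run):

# Target side: create the NVMe-oF TCP transport via the JSON-RPC socket.
./scripts/rpc.py nvmf_create_transport -t tcp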
00:28:35.056 [2024-11-15 11:46:35.579977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.579988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.580895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.580907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.581003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.056 [2024-11-15 11:46:35.581014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.056 qpair failed and we were unable to recover it. 00:28:35.056 [2024-11-15 11:46:35.581221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 
00:28:35.057 [2024-11-15 11:46:35.581372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.581523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.581633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.581721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.581812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.581957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.581968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 
00:28:35.057 [2024-11-15 11:46:35.582660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.582909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.582991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 
00:28:35.057 [2024-11-15 11:46:35.583892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.583974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.583985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.584859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.584870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.585127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.585138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 
00:28:35.057 [2024-11-15 11:46:35.585203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.057 [2024-11-15 11:46:35.585214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.057 qpair failed and we were unable to recover it. 00:28:35.057 [2024-11-15 11:46:35.585363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.585374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.585467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.585478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.585557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.585568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.585774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.585785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.585962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.585973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 
00:28:35.058 [2024-11-15 11:46:35.586637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.586893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.586903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 
00:28:35.058 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.058 [2024-11-15 11:46:35.587792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.587947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.587960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.058 [2024-11-15 11:46:35.588098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.588173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.588279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.058 [2024-11-15 11:46:35.588520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.588631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.058 [2024-11-15 11:46:35.588818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 
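The xtrace line above shows target_disconnect.sh@22 invoking rpc_cmd nvmf_create_subsystem. Under the usual autotest helpers this is a thin wrapper around the JSON-RPC client; a standalone equivalent, assuming the default /var/tmp/spdk.sock RPC socket, would look like the sketch below (arguments mirror the traced command):

# Create the target subsystem; -a allows any host NQN, -s sets the serial.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001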
00:28:35.058 [2024-11-15 11:46:35.588905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.588984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.588995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.589914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.058 [2024-11-15 11:46:35.589925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.058 qpair failed and we were unable to recover it. 00:28:35.058 [2024-11-15 11:46:35.590076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.590086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 
00:28:35.059 [2024-11-15 11:46:35.590242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.590252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.590399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.590411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.590485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.590496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.590645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.590656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.590860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.590871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 
00:28:35.059 [2024-11-15 11:46:35.591679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.591987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.591998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.592133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.592144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.592216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.592227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.592371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.592381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.592547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.592558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.592658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.592669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.592849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.592859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.593006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.593017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 
00:28:35.059 [2024-11-15 11:46:35.593173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.593184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.593389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.593400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.593567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.593578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.593663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.593674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.593818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.593829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.594005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.594164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.594263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.594433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.594581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 
00:28:35.059 [2024-11-15 11:46:35.594671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.594830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.594841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.595001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.595011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.595142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.595153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.595354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.595365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.595466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.595479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 [2024-11-15 11:46:35.595554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.059 [2024-11-15 11:46:35.595564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.059 qpair failed and we were unable to recover it. 00:28:35.059 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.059 [2024-11-15 11:46:35.595737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.595749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.595981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.595992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 
00:28:35.060 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.060 [2024-11-15 11:46:35.596123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.596134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.596270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.596281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.060 [2024-11-15 11:46:35.596447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.596464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.596607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.596618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 [2024-11-15 11:46:35.596794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.596805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.596986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.596996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.597096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.597107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.597369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.597380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 
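The next traced step, target_disconnect.sh@24, attaches the Malloc0 bdev as a namespace of the subsystem. A minimal sketch of that step; the Malloc0 bdev itself is created earlier in the script, and the size/block-size arguments shown here are illustrative, not taken from this log:

# Back the subsystem with a malloc bdev and expose it as a namespace.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0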
00:28:35.060 [2024-11-15 11:46:35.597547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.597558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.597700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.597711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.597811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.597822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.598011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.598165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.598328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.598517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.598682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.598838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.598993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 
00:28:35.060 [2024-11-15 11:46:35.599139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.599242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.599399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.599577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.599658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.599822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.599972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.599984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.600113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.600204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.600391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 
00:28:35.060 [2024-11-15 11:46:35.600478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.600651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.600816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.600908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.600919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.601053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.601064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.601210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.601221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.601357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.060 [2024-11-15 11:46:35.601367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.060 qpair failed and we were unable to recover it. 00:28:35.060 [2024-11-15 11:46:35.601522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.601535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.601690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.601700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.601780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.601792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 
00:28:35.061 [2024-11-15 11:46:35.601931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.601941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.602966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.602977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 
00:28:35.061 [2024-11-15 11:46:35.603126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.603137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.603286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.603297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.603363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.603373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.603455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.603476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.603683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.603695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.061 [2024-11-15 11:46:35.603855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.603866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.604092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.604103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.061 [2024-11-15 11:46:35.604251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.604262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.604403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.604414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 
00:28:35.061 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.061 [2024-11-15 11:46:35.604589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.604601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.061 [2024-11-15 11:46:35.604740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.604751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.604823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.604836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.605046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.605057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.605260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.605271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.605371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.605382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.605532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.605543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.605606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.605616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 00:28:35.061 [2024-11-15 11:46:35.605752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.061 [2024-11-15 11:46:35.605762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.061 qpair failed and we were unable to recover it. 
00:28:35.061 [2024-11-15 11:46:35.605848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.605858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.605936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.605947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.606100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.606111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.606194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.606205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.606418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.606429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.606511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.606522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.606606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.606617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.606833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.606844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.607024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.607034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.607103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.062 [2024-11-15 11:46:35.607113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4f34000b90 with addr=10.0.0.2, port=4420 00:28:35.062 qpair failed and we were unable to recover it. 
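The errors above are the host-side initiator repeatedly retrying its TCP connection to the target at 10.0.0.2:4420 before any listener has been created there: on Linux errno 111 is ECONNREFUSED, so every posix_sock_create() attempt fails immediately and SPDK gives up on that queue pair ("qpair failed and we were unable to recover it"). The minimal Python sketch below is illustrative only (it is not part of the test suite); it shows what a single attempt of that retry storm looks like when nothing is listening on the port.

import errno
import os
import socket

# On Linux, errno 111 is ECONNREFUSED ("Connection refused"), the value the
# posix_sock_create() errors above keep reporting while no NVMe/TCP listener
# exists on 10.0.0.2:4420.
print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))

def try_connect(addr: str, port: int) -> int:
    # Return 0 on success or the errno of the failed connect(), like connect(2).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((addr, port))

# Run against a local port with nothing listening to get the same immediate
# refusal (substitute 10.0.0.2 to mirror the log if that address is reachable).
rc = try_connect("127.0.0.1", 4420)
print("connect_ex ->", rc, os.strerror(rc) if rc else "ok")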
00:28:35.062 [2024-11-15 11:46:35.607211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.062 [2024-11-15 11:46:35.609738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.609813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.609834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.609842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.609848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.609869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.062 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:35.062 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.062 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.062 [2024-11-15 11:46:35.619609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.619681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.619696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.619705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.619714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.619730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 
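At this point the target's listener comes up (the tcp.c:1081 nvmf_tcp_listen notice above) and the failure mode changes: the TCP connection now succeeds, but the NVMe-oF Fabrics CONNECT for the I/O queue pair is rejected. The target logs "Unknown controller ID 0x1" while the host sees the CONNECT completion with sct 1, sc 130, which nvme_tcp then surfaces as a transport error of -6 on the queue pair. The small helper below decodes that status pair; the name tables are my reading of the NVMe-oF Fabrics CONNECT status values (130 == 0x82) and should be checked against the specification rather than treated as authoritative.

# Illustrative decoder for the "sct 1, sc 130" printed by
# nvme_fabric_qpair_connect_poll above. The name tables are assumptions based
# on the NVMe-oF Fabrics spec; they are not taken from SPDK itself.
SCT_NAMES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x7: "Vendor Specific",
}

# Command-specific status codes defined for the Fabrics CONNECT command
# (assumed mapping; verify against the NVMe-oF specification).
CONNECT_SC_NAMES = {
    0x80: "Connect Incompatible Format",
    0x81: "Connect Controller Busy",
    0x82: "Connect Invalid Parameters",
    0x83: "Connect Restart Discovery",
    0x84: "Connect Invalid Host",
}

def decode_status(sct: int, sc: int) -> str:
    sct_name = SCT_NAMES.get(sct, f"unknown SCT {sct:#x}")
    sc_name = CONNECT_SC_NAMES.get(sc, f"unknown SC {sc:#x}")
    return f"sct={sct:#x} ({sct_name}), sc={sc:#x} ({sc_name})"

# 130 == 0x82: the target rejected the CONNECT parameters, consistent with its
# own "Unknown controller ID 0x1" error for this I/O queue pair.
print(decode_status(1, 130))

Either way the host cannot complete the association, so each subsequent tc2 iteration below repeats the same rejection roughly every 10 ms.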
00:28:35.062 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.062 11:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1407535 00:28:35.062 [2024-11-15 11:46:35.629685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.629764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.629786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.629792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.629799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.629814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.639535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.639609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.639622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.639629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.639636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.639652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.649610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.649685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.649699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.649706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.649712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.649727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 
00:28:35.062 [2024-11-15 11:46:35.659615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.659680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.659694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.659701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.659707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.659723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.669634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.669698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.669713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.669723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.669730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.669744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 00:28:35.062 [2024-11-15 11:46:35.679596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.679654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.679667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.679673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.679679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.679694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.062 qpair failed and we were unable to recover it. 
00:28:35.062 [2024-11-15 11:46:35.689723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.062 [2024-11-15 11:46:35.689782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.062 [2024-11-15 11:46:35.689795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.062 [2024-11-15 11:46:35.689802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.062 [2024-11-15 11:46:35.689808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.062 [2024-11-15 11:46:35.689822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.699736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.699795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.699808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.699815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.699821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.699836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.709788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.709854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.709868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.709875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.709881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.709896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 
00:28:35.063 [2024-11-15 11:46:35.719733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.719790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.719803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.719810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.719815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.719830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.729816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.729877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.729890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.729897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.729903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.729917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.739833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.739890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.739903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.739910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.739917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.739932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 
00:28:35.063 [2024-11-15 11:46:35.749869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.749935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.749949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.749956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.749962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.749977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.759840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.759900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.759914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.759921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.759926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.759941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.769963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.770023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.770036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.770044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.770050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.770065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 
00:28:35.063 [2024-11-15 11:46:35.779956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.780019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.780032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.780039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.780045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.780060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.789975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.790036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.790048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.790056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.790062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.790077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.799964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.800036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.800049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.800059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.800065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.800080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 
00:28:35.063 [2024-11-15 11:46:35.810057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.810118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.810132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.810139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.810145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.810161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.063 [2024-11-15 11:46:35.820078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.063 [2024-11-15 11:46:35.820190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.063 [2024-11-15 11:46:35.820205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.063 [2024-11-15 11:46:35.820211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.063 [2024-11-15 11:46:35.820217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.063 [2024-11-15 11:46:35.820232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.063 qpair failed and we were unable to recover it. 00:28:35.064 [2024-11-15 11:46:35.830105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.830183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.830197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.830204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.830209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.830224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 
00:28:35.064 [2024-11-15 11:46:35.840104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.840182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.840197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.840203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.840209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.840228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 00:28:35.064 [2024-11-15 11:46:35.850169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.850272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.850286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.850293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.850299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.850314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 00:28:35.064 [2024-11-15 11:46:35.860198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.860262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.860275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.860281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.860287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.860302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 
00:28:35.064 [2024-11-15 11:46:35.870219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.870278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.870291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.870298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.870304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.870318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 00:28:35.064 [2024-11-15 11:46:35.880235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.880293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.880306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.880313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.880318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.880333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 00:28:35.064 [2024-11-15 11:46:35.890285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.064 [2024-11-15 11:46:35.890349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.064 [2024-11-15 11:46:35.890362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.064 [2024-11-15 11:46:35.890369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.064 [2024-11-15 11:46:35.890375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.064 [2024-11-15 11:46:35.890390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.064 qpair failed and we were unable to recover it. 
00:28:35.325 [2024-11-15 11:46:35.900331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.325 [2024-11-15 11:46:35.900395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.325 [2024-11-15 11:46:35.900409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.325 [2024-11-15 11:46:35.900416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.325 [2024-11-15 11:46:35.900423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.325 [2024-11-15 11:46:35.900438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.325 qpair failed and we were unable to recover it. 00:28:35.325 [2024-11-15 11:46:35.910372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.325 [2024-11-15 11:46:35.910428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.325 [2024-11-15 11:46:35.910442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.325 [2024-11-15 11:46:35.910448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.325 [2024-11-15 11:46:35.910454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.325 [2024-11-15 11:46:35.910472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.325 qpair failed and we were unable to recover it. 00:28:35.325 [2024-11-15 11:46:35.920303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.325 [2024-11-15 11:46:35.920360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.325 [2024-11-15 11:46:35.920373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.325 [2024-11-15 11:46:35.920380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.325 [2024-11-15 11:46:35.920386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.325 [2024-11-15 11:46:35.920400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.325 qpair failed and we were unable to recover it. 
00:28:35.325 [2024-11-15 11:46:35.930416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.325 [2024-11-15 11:46:35.930480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.325 [2024-11-15 11:46:35.930496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.325 [2024-11-15 11:46:35.930503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.325 [2024-11-15 11:46:35.930509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.325 [2024-11-15 11:46:35.930523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.325 qpair failed and we were unable to recover it. 00:28:35.325 [2024-11-15 11:46:35.940434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.325 [2024-11-15 11:46:35.940500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.325 [2024-11-15 11:46:35.940513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.325 [2024-11-15 11:46:35.940519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.325 [2024-11-15 11:46:35.940526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.325 [2024-11-15 11:46:35.940541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.325 qpair failed and we were unable to recover it. 00:28:35.325 [2024-11-15 11:46:35.950440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.325 [2024-11-15 11:46:35.950511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.325 [2024-11-15 11:46:35.950534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.325 [2024-11-15 11:46:35.950540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:35.950547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:35.950562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 
00:28:35.326 [2024-11-15 11:46:35.960399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:35.960461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:35.960474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:35.960481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:35.960487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:35.960501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:35.970509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:35.970569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:35.970582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:35.970588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:35.970598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:35.970613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:35.980559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:35.980624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:35.980636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:35.980643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:35.980649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:35.980664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 
00:28:35.326 [2024-11-15 11:46:35.990564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:35.990667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:35.990682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:35.990688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:35.990695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:35.990710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:36.000539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.000597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.000611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.000617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.000623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.000638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:36.010619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.010678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.010693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.010701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.010707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.010722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 
00:28:35.326 [2024-11-15 11:46:36.020645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.020705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.020719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.020725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.020732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.020747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:36.030677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.030733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.030746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.030753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.030760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.030774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:36.040645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.040707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.040720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.040726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.040732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.040747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 
00:28:35.326 [2024-11-15 11:46:36.050702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.050763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.050776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.050783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.050790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.050805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:36.060807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.060871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.060886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.060893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.060899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.060914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 00:28:35.326 [2024-11-15 11:46:36.070793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.070851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.070865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.070872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.070878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.326 [2024-11-15 11:46:36.070894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.326 qpair failed and we were unable to recover it. 
00:28:35.326 [2024-11-15 11:46:36.080785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.326 [2024-11-15 11:46:36.080850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.326 [2024-11-15 11:46:36.080863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.326 [2024-11-15 11:46:36.080870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.326 [2024-11-15 11:46:36.080875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.080891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.327 [2024-11-15 11:46:36.090899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.090964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.090986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.090993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.090999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.091014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.327 [2024-11-15 11:46:36.100875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.100938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.100951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.100958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.100968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.100983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 
00:28:35.327 [2024-11-15 11:46:36.110910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.110977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.110992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.110999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.111005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.111019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.327 [2024-11-15 11:46:36.120875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.120929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.120942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.120949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.120955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.120970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.327 [2024-11-15 11:46:36.130992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.131050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.131062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.131070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.131077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.131092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 
00:28:35.327 [2024-11-15 11:46:36.141019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.141075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.141088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.141094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.141101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.141116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.327 [2024-11-15 11:46:36.151033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.151099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.151114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.151120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.151126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.151141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.327 [2024-11-15 11:46:36.160993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.161049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.161063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.161069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.161075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.161090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 
00:28:35.327 [2024-11-15 11:46:36.171099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.327 [2024-11-15 11:46:36.171156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.327 [2024-11-15 11:46:36.171169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.327 [2024-11-15 11:46:36.171176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.327 [2024-11-15 11:46:36.171182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.327 [2024-11-15 11:46:36.171196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.327 qpair failed and we were unable to recover it. 00:28:35.588 [2024-11-15 11:46:36.181110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.588 [2024-11-15 11:46:36.181171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.588 [2024-11-15 11:46:36.181185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.588 [2024-11-15 11:46:36.181192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.588 [2024-11-15 11:46:36.181199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.588 [2024-11-15 11:46:36.181214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.588 qpair failed and we were unable to recover it. 00:28:35.588 [2024-11-15 11:46:36.191138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.588 [2024-11-15 11:46:36.191198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.588 [2024-11-15 11:46:36.191214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.588 [2024-11-15 11:46:36.191221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.588 [2024-11-15 11:46:36.191227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.588 [2024-11-15 11:46:36.191242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.588 qpair failed and we were unable to recover it. 
00:28:35.588 [2024-11-15 11:46:36.201103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.588 [2024-11-15 11:46:36.201159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.588 [2024-11-15 11:46:36.201172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.588 [2024-11-15 11:46:36.201178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.201184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.201199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.211190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.211259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.211273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.211279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.211286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.211300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.221265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.221325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.221340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.221348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.221354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.221370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 
00:28:35.589 [2024-11-15 11:46:36.231255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.231319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.231333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.231343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.231349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.231364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.241232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.241288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.241301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.241307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.241313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.241328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.251330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.251393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.251407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.251415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.251421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.251435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 
00:28:35.589 [2024-11-15 11:46:36.261345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.261406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.261435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.261442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.261448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.261473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.271374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.271433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.271446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.271453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.271463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.271478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.281271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.281328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.281344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.281351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.281357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.281372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 
00:28:35.589 [2024-11-15 11:46:36.291428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.291502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.291516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.291524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.291530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.291546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.301462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.301520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.301533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.301540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.301546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.301561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 00:28:35.589 [2024-11-15 11:46:36.311418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.589 [2024-11-15 11:46:36.311480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.589 [2024-11-15 11:46:36.311493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.589 [2024-11-15 11:46:36.311500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.589 [2024-11-15 11:46:36.311505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.589 [2024-11-15 11:46:36.311520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.589 qpair failed and we were unable to recover it. 
00:28:35.589 [2024-11-15 11:46:36.321396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.321483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.321498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.321504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.321510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.321526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.331469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.331539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.331553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.331559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.331565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.331580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.341645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.341713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.341727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.341733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.341739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.341754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 
00:28:35.590 [2024-11-15 11:46:36.351604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.351660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.351673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.351679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.351685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.351701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.361582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.361637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.361650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.361663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.361669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.361683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.371668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.371730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.371743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.371750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.371757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.371771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 
00:28:35.590 [2024-11-15 11:46:36.381616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.381674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.381687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.381694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.381700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.381715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.391752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.391818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.391833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.391841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.391847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.391862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.401719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.401783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.401796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.401802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.401808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.401826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 
00:28:35.590 [2024-11-15 11:46:36.411758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.411826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.411839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.411846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.411852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.411867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.421800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.421859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.421872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.590 [2024-11-15 11:46:36.421879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.590 [2024-11-15 11:46:36.421885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.590 [2024-11-15 11:46:36.421900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.590 qpair failed and we were unable to recover it. 00:28:35.590 [2024-11-15 11:46:36.431759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.590 [2024-11-15 11:46:36.431824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.590 [2024-11-15 11:46:36.431837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.591 [2024-11-15 11:46:36.431844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.591 [2024-11-15 11:46:36.431850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.591 [2024-11-15 11:46:36.431865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.591 qpair failed and we were unable to recover it. 
00:28:35.850 [2024-11-15 11:46:36.441719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.850 [2024-11-15 11:46:36.441776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.850 [2024-11-15 11:46:36.441788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.850 [2024-11-15 11:46:36.441795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.850 [2024-11-15 11:46:36.441801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.850 [2024-11-15 11:46:36.441815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.850 qpair failed and we were unable to recover it. 00:28:35.850 [2024-11-15 11:46:36.451824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.850 [2024-11-15 11:46:36.451882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.451895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.451902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.451908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.451922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.461927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.461993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.462007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.462014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.462020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.462035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 
00:28:35.851 [2024-11-15 11:46:36.471858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.471922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.471944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.471951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.471957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.471972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.481943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.482000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.482013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.482020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.482026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.482041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.491936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.491995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.492011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.492017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.492024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.492038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 
00:28:35.851 [2024-11-15 11:46:36.501952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.502014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.502027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.502034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.502040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.502055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.512060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.512119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.512133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.512140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.512146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.512162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.522055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.522113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.522127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.522134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.522140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.522154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 
00:28:35.851 [2024-11-15 11:46:36.532048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.532116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.532130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.532137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.532146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.532161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.542070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.542134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.542147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.542154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.542161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.542176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.552084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.552150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.552172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.552178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.552184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.552199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 
00:28:35.851 [2024-11-15 11:46:36.562106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.562164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.562177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.562183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.562189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.562203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.572159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.572226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.851 [2024-11-15 11:46:36.572239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.851 [2024-11-15 11:46:36.572245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.851 [2024-11-15 11:46:36.572251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.851 [2024-11-15 11:46:36.572266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.851 qpair failed and we were unable to recover it. 00:28:35.851 [2024-11-15 11:46:36.582261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.851 [2024-11-15 11:46:36.582326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.582339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.582345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.582351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.582366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 
00:28:35.852 [2024-11-15 11:46:36.592278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.592337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.592351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.592358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.592364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.592379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.602217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.602275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.602287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.602294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.602300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.602315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.612356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.612420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.612433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.612441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.612447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.612466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 
00:28:35.852 [2024-11-15 11:46:36.622284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.622341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.622357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.622364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.622370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.622385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.632382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.632439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.632453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.632464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.632470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.632486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.642412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.642474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.642487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.642494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.642499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.642515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 
00:28:35.852 [2024-11-15 11:46:36.652397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.652463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.652477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.652484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.652489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.652504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.662425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.662492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.662505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.662512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.662521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.662536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.672431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.672501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.672516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.672522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.672529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.672544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 
00:28:35.852 [2024-11-15 11:46:36.682486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.682544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.682557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.682564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.682569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.682584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:35.852 [2024-11-15 11:46:36.692558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.852 [2024-11-15 11:46:36.692639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.852 [2024-11-15 11:46:36.692654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.852 [2024-11-15 11:46:36.692660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.852 [2024-11-15 11:46:36.692666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:35.852 [2024-11-15 11:46:36.692682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.852 qpair failed and we were unable to recover it. 00:28:36.114 [2024-11-15 11:46:36.702538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.702609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.702623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.702630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.702636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.702651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 
00:28:36.114 [2024-11-15 11:46:36.712629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.712700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.712715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.712721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.712727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.712741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 00:28:36.114 [2024-11-15 11:46:36.722598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.722654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.722667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.722673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.722679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.722694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 00:28:36.114 [2024-11-15 11:46:36.732696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.732761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.732774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.732781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.732787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.732803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 
00:28:36.114 [2024-11-15 11:46:36.742635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.742698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.742711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.742718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.742724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.742740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 00:28:36.114 [2024-11-15 11:46:36.752731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.752796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.752813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.752819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.752825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.752840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 00:28:36.114 [2024-11-15 11:46:36.762726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.762787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.762800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.762806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.114 [2024-11-15 11:46:36.762812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.114 [2024-11-15 11:46:36.762827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.114 qpair failed and we were unable to recover it. 
00:28:36.114 [2024-11-15 11:46:36.772816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.114 [2024-11-15 11:46:36.772884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.114 [2024-11-15 11:46:36.772898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.114 [2024-11-15 11:46:36.772904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.772910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.772924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.782861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.782921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.782934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.782941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.782947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.782962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.792778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.792840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.792853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.792864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.792870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.792885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 
00:28:36.115 [2024-11-15 11:46:36.802832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.802889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.802902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.802908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.802914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.802928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.812925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.812999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.813013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.813019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.813026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.813041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.822959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.823016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.823028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.823035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.823041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.823056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 
00:28:36.115 [2024-11-15 11:46:36.832978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.833036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.833049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.833056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.833062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.833076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.842932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.843024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.843038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.843045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.843051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.843066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.853029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.853098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.853112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.853118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.853124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.853139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 
00:28:36.115 [2024-11-15 11:46:36.863070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.863132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.863145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.863153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.863159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.863174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.873124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.873182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.873197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.873203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.873210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.873225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 00:28:36.115 [2024-11-15 11:46:36.883071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.115 [2024-11-15 11:46:36.883128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.115 [2024-11-15 11:46:36.883141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.115 [2024-11-15 11:46:36.883148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.115 [2024-11-15 11:46:36.883153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.115 [2024-11-15 11:46:36.883168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.115 qpair failed and we were unable to recover it. 
00:28:36.115 [2024-11-15 11:46:36.893164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.893229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.893242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.893248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.893254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.893268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 00:28:36.116 [2024-11-15 11:46:36.903181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.903290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.903304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.903310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.903316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.903330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 00:28:36.116 [2024-11-15 11:46:36.913275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.913335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.913348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.913354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.913360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.913375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 
00:28:36.116 [2024-11-15 11:46:36.923169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.923223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.923236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.923245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.923251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.923266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 00:28:36.116 [2024-11-15 11:46:36.933275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.933335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.933348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.933356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.933362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.933377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 00:28:36.116 [2024-11-15 11:46:36.943298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.943354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.943367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.943373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.943379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.943393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 
00:28:36.116 [2024-11-15 11:46:36.953317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.116 [2024-11-15 11:46:36.953377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.116 [2024-11-15 11:46:36.953390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.116 [2024-11-15 11:46:36.953397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.116 [2024-11-15 11:46:36.953403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.116 [2024-11-15 11:46:36.953418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.116 qpair failed and we were unable to recover it. 00:28:36.377 [2024-11-15 11:46:36.963284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:36.963341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:36.963354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:36.963361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:36.963367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:36.963385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 00:28:36.377 [2024-11-15 11:46:36.973386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:36.973444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:36.973456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:36.973467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:36.973472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:36.973487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 
00:28:36.377 [2024-11-15 11:46:36.983419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:36.983480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:36.983494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:36.983501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:36.983506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:36.983521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 00:28:36.377 [2024-11-15 11:46:36.993445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:36.993514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:36.993528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:36.993535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:36.993542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:36.993556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 00:28:36.377 [2024-11-15 11:46:37.003409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:37.003469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:37.003481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:37.003488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:37.003494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:37.003508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 
00:28:36.377 [2024-11-15 11:46:37.013499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:37.013560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:37.013573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:37.013580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:37.013586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:37.013601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 00:28:36.377 [2024-11-15 11:46:37.023530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:37.023594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:37.023607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:37.023613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:37.023619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:37.023634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 00:28:36.377 [2024-11-15 11:46:37.033564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:37.033625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:37.033638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:37.033645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:37.033651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:37.033666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.377 qpair failed and we were unable to recover it. 
00:28:36.377 [2024-11-15 11:46:37.043518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.377 [2024-11-15 11:46:37.043577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.377 [2024-11-15 11:46:37.043590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.377 [2024-11-15 11:46:37.043596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.377 [2024-11-15 11:46:37.043602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.377 [2024-11-15 11:46:37.043616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.053599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.053666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.053687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.053694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.053700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.053714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.063642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.063700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.063713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.063719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.063725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.063740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 
00:28:36.378 [2024-11-15 11:46:37.073664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.073724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.073737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.073743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.073749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.073763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.083641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.083696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.083709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.083715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.083721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.083735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.093648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.093736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.093750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.093757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.093765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.093780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 
00:28:36.378 [2024-11-15 11:46:37.103760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.103823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.103836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.103843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.103849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.103864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.113778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.113837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.113849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.113856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.113862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.113877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.123796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.123850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.123863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.123869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.123875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.123890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 
00:28:36.378 [2024-11-15 11:46:37.133842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.133914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.133930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.133936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.133942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.133956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.143879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.143939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.143952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.143959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.143965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.143980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.153901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.153961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.153974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.153980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.153987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.154002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 
00:28:36.378 [2024-11-15 11:46:37.163886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.163944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.163957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.163963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.163969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.163984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.173971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.378 [2024-11-15 11:46:37.174042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.378 [2024-11-15 11:46:37.174056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.378 [2024-11-15 11:46:37.174062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.378 [2024-11-15 11:46:37.174068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.378 [2024-11-15 11:46:37.174082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.378 qpair failed and we were unable to recover it. 00:28:36.378 [2024-11-15 11:46:37.183997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.379 [2024-11-15 11:46:37.184062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.379 [2024-11-15 11:46:37.184078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.379 [2024-11-15 11:46:37.184085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.379 [2024-11-15 11:46:37.184091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.379 [2024-11-15 11:46:37.184105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.379 qpair failed and we were unable to recover it. 
00:28:36.379 [2024-11-15 11:46:37.194050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.379 [2024-11-15 11:46:37.194107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.379 [2024-11-15 11:46:37.194120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.379 [2024-11-15 11:46:37.194126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.379 [2024-11-15 11:46:37.194133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.379 [2024-11-15 11:46:37.194147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.379 qpair failed and we were unable to recover it. 00:28:36.379 [2024-11-15 11:46:37.204002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.379 [2024-11-15 11:46:37.204059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.379 [2024-11-15 11:46:37.204072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.379 [2024-11-15 11:46:37.204078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.379 [2024-11-15 11:46:37.204083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.379 [2024-11-15 11:46:37.204099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.379 qpair failed and we were unable to recover it. 00:28:36.379 [2024-11-15 11:46:37.214020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.379 [2024-11-15 11:46:37.214080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.379 [2024-11-15 11:46:37.214094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.379 [2024-11-15 11:46:37.214101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.379 [2024-11-15 11:46:37.214107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.379 [2024-11-15 11:46:37.214121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.379 qpair failed and we were unable to recover it. 
00:28:36.379 [2024-11-15 11:46:37.224111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.379 [2024-11-15 11:46:37.224178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.379 [2024-11-15 11:46:37.224193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.379 [2024-11-15 11:46:37.224200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.379 [2024-11-15 11:46:37.224209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.379 [2024-11-15 11:46:37.224223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.379 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.234143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.234203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.234216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.234223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.234229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.234244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.244121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.244176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.244189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.244196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.244201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.244215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 
00:28:36.640 [2024-11-15 11:46:37.254203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.254264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.254276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.254283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.254289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.254304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.264223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.264280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.264293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.264300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.264306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.264321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.274249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.274307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.274320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.274326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.274333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.274347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 
00:28:36.640 [2024-11-15 11:46:37.284222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.284278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.284292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.284298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.284304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.284317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.294237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.294296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.294309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.294315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.294321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.294335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.304352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.304413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.304426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.304433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.304439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.304453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 
00:28:36.640 [2024-11-15 11:46:37.314389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.314456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.314475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.314482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.314487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.314502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.324329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.324385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.324399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.324405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.324411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.324425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.640 qpair failed and we were unable to recover it. 00:28:36.640 [2024-11-15 11:46:37.334471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.640 [2024-11-15 11:46:37.334539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.640 [2024-11-15 11:46:37.334553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.640 [2024-11-15 11:46:37.334560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.640 [2024-11-15 11:46:37.334566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.640 [2024-11-15 11:46:37.334580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 
00:28:36.641 [2024-11-15 11:46:37.344457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.344519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.344532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.344539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.344545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.344560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.354493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.354554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.354566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.354577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.354583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.354597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.364407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.364466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.364479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.364486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.364491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.364506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 
00:28:36.641 [2024-11-15 11:46:37.374556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.374625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.374638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.374645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.374651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.374666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.384596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.384654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.384667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.384673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.384679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.384695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.394586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.394646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.394659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.394666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.394672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.394690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 
00:28:36.641 [2024-11-15 11:46:37.404618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.404692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.404705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.404712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.404718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.404733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.414677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.414740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.414752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.414759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.414766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.414780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.424656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.424732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.424747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.424753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.424760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.424775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 
00:28:36.641 [2024-11-15 11:46:37.434726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.434797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.641 [2024-11-15 11:46:37.434812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.641 [2024-11-15 11:46:37.434818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.641 [2024-11-15 11:46:37.434824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.641 [2024-11-15 11:46:37.434839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.641 qpair failed and we were unable to recover it. 00:28:36.641 [2024-11-15 11:46:37.444715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.641 [2024-11-15 11:46:37.444786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.642 [2024-11-15 11:46:37.444800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.642 [2024-11-15 11:46:37.444807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.642 [2024-11-15 11:46:37.444814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.642 [2024-11-15 11:46:37.444829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.642 qpair failed and we were unable to recover it. 00:28:36.642 [2024-11-15 11:46:37.454833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.642 [2024-11-15 11:46:37.454893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.642 [2024-11-15 11:46:37.454907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.642 [2024-11-15 11:46:37.454913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.642 [2024-11-15 11:46:37.454920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.642 [2024-11-15 11:46:37.454935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.642 qpair failed and we were unable to recover it. 
00:28:36.642 [2024-11-15 11:46:37.464864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.642 [2024-11-15 11:46:37.464924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.642 [2024-11-15 11:46:37.464937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.642 [2024-11-15 11:46:37.464944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.642 [2024-11-15 11:46:37.464950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.642 [2024-11-15 11:46:37.464965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.642 qpair failed and we were unable to recover it. 00:28:36.642 [2024-11-15 11:46:37.474846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.642 [2024-11-15 11:46:37.474904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.642 [2024-11-15 11:46:37.474917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.642 [2024-11-15 11:46:37.474923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.642 [2024-11-15 11:46:37.474929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.642 [2024-11-15 11:46:37.474944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.642 qpair failed and we were unable to recover it. 00:28:36.642 [2024-11-15 11:46:37.484822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.642 [2024-11-15 11:46:37.484900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.642 [2024-11-15 11:46:37.484914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.642 [2024-11-15 11:46:37.484923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.642 [2024-11-15 11:46:37.484930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.642 [2024-11-15 11:46:37.484945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.642 qpair failed and we were unable to recover it. 
00:28:36.902 [2024-11-15 11:46:37.494926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.902 [2024-11-15 11:46:37.495015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.902 [2024-11-15 11:46:37.495030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.902 [2024-11-15 11:46:37.495036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.902 [2024-11-15 11:46:37.495042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.902 [2024-11-15 11:46:37.495057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.902 qpair failed and we were unable to recover it. 00:28:36.902 [2024-11-15 11:46:37.504917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.902 [2024-11-15 11:46:37.504990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.902 [2024-11-15 11:46:37.505005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.902 [2024-11-15 11:46:37.505011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.902 [2024-11-15 11:46:37.505017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.902 [2024-11-15 11:46:37.505031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.902 qpair failed and we were unable to recover it. 00:28:36.902 [2024-11-15 11:46:37.514947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.902 [2024-11-15 11:46:37.515005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.902 [2024-11-15 11:46:37.515019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.902 [2024-11-15 11:46:37.515025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.902 [2024-11-15 11:46:37.515031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.902 [2024-11-15 11:46:37.515045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.902 qpair failed and we were unable to recover it. 
00:28:36.902 [2024-11-15 11:46:37.524911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.902 [2024-11-15 11:46:37.524969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.902 [2024-11-15 11:46:37.524982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.902 [2024-11-15 11:46:37.524988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.902 [2024-11-15 11:46:37.524994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.902 [2024-11-15 11:46:37.525012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.902 qpair failed and we were unable to recover it. 00:28:36.902 [2024-11-15 11:46:37.535008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.535068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.535082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.535089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.535095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.535109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.545063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.545135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.545149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.545156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.545162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.545177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 
00:28:36.903 [2024-11-15 11:46:37.555067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.555128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.555141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.555148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.555155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.555171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.565043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.565139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.565154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.565160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.565166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.565181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.575190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.575251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.575265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.575272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.575278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.575294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 
00:28:36.903 [2024-11-15 11:46:37.585204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.585269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.585291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.585297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.585303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.585319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.595200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.595260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.595274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.595281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.595288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.595304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.605153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.605227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.605242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.605249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.605255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.605270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 
00:28:36.903 [2024-11-15 11:46:37.615260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.615318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.615334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.615341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.615348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.615363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.625303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.625367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.625379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.625386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.625393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.625408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.635336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.635403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.635418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.635424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.635430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.635445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 
00:28:36.903 [2024-11-15 11:46:37.645267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.645321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.645336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.645343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.645349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.903 [2024-11-15 11:46:37.645364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.903 qpair failed and we were unable to recover it. 00:28:36.903 [2024-11-15 11:46:37.655293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.903 [2024-11-15 11:46:37.655360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.903 [2024-11-15 11:46:37.655382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.903 [2024-11-15 11:46:37.655389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.903 [2024-11-15 11:46:37.655398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.655414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-15 11:46:37.665363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.665422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.665435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.665442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.665449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.665468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-15 11:46:37.675472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.675534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.675547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.675554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.675561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.675577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-15 11:46:37.685393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.685462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.685476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.685482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.685488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.685503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-15 11:46:37.695470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.695533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.695546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.695553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.695560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.695574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-15 11:46:37.705527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.705585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.705599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.705606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.705612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.705627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-15 11:46:37.715505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.715563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.715577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.715584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.715590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.715605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-15 11:46:37.725504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.725557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.725571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.725577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.725583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.725597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-15 11:46:37.735582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.735640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.735654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.735661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.735668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.735683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-15 11:46:37.745660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.904 [2024-11-15 11:46:37.745755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.904 [2024-11-15 11:46:37.745775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.904 [2024-11-15 11:46:37.745781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.904 [2024-11-15 11:46:37.745787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:36.904 [2024-11-15 11:46:37.745802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.904 qpair failed and we were unable to recover it. 00:28:37.165 [2024-11-15 11:46:37.755630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.755697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.755712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.755720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.755726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.755741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 
00:28:37.165 [2024-11-15 11:46:37.765642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.765698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.765711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.765717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.765723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.765737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 00:28:37.165 [2024-11-15 11:46:37.775724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.775782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.775795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.775803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.775809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.775825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 00:28:37.165 [2024-11-15 11:46:37.785727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.785797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.785812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.785818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.785828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.785843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 
00:28:37.165 [2024-11-15 11:46:37.795757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.795818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.795832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.795838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.795845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.795861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 00:28:37.165 [2024-11-15 11:46:37.805737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.805795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.805809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.805816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.805822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.805837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 00:28:37.165 [2024-11-15 11:46:37.815820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.815882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.165 [2024-11-15 11:46:37.815895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.165 [2024-11-15 11:46:37.815902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.165 [2024-11-15 11:46:37.815909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.165 [2024-11-15 11:46:37.815923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.165 qpair failed and we were unable to recover it. 
00:28:37.165 [2024-11-15 11:46:37.825868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.165 [2024-11-15 11:46:37.825926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.825939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.825947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.825953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.825967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.835831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.835895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.835908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.835915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.835922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.835936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.845775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.845831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.845844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.845850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.845856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.845871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 
00:28:37.166 [2024-11-15 11:46:37.855924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.855989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.856002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.856009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.856016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.856030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.865973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.866034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.866047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.866054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.866060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.866074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.875996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.876055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.876071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.876078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.876085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.876099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 
00:28:37.166 [2024-11-15 11:46:37.885954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.886011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.886024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.886031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.886036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.886051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.896028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.896088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.896101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.896107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.896114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.896128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.906095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.906177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.906192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.906199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.906205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.906220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 
00:28:37.166 [2024-11-15 11:46:37.916088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.916146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.916159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.916169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.916176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.916191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.926065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.926123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.926136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.926143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.926148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.926163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 00:28:37.166 [2024-11-15 11:46:37.936158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.936223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.936236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.936243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.936249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.936264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.166 qpair failed and we were unable to recover it. 
00:28:37.166 [2024-11-15 11:46:37.946124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.166 [2024-11-15 11:46:37.946184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.166 [2024-11-15 11:46:37.946198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.166 [2024-11-15 11:46:37.946205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.166 [2024-11-15 11:46:37.946211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.166 [2024-11-15 11:46:37.946226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 00:28:37.167 [2024-11-15 11:46:37.956252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.167 [2024-11-15 11:46:37.956332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.167 [2024-11-15 11:46:37.956347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.167 [2024-11-15 11:46:37.956353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.167 [2024-11-15 11:46:37.956359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.167 [2024-11-15 11:46:37.956377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 00:28:37.167 [2024-11-15 11:46:37.966205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.167 [2024-11-15 11:46:37.966260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.167 [2024-11-15 11:46:37.966274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.167 [2024-11-15 11:46:37.966280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.167 [2024-11-15 11:46:37.966285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.167 [2024-11-15 11:46:37.966300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 
00:28:37.167 [2024-11-15 11:46:37.976283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.167 [2024-11-15 11:46:37.976393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.167 [2024-11-15 11:46:37.976408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.167 [2024-11-15 11:46:37.976415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.167 [2024-11-15 11:46:37.976421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.167 [2024-11-15 11:46:37.976436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 00:28:37.167 [2024-11-15 11:46:37.986289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.167 [2024-11-15 11:46:37.986342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.167 [2024-11-15 11:46:37.986354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.167 [2024-11-15 11:46:37.986361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.167 [2024-11-15 11:46:37.986367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.167 [2024-11-15 11:46:37.986382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 00:28:37.167 [2024-11-15 11:46:37.996318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.167 [2024-11-15 11:46:37.996375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.167 [2024-11-15 11:46:37.996389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.167 [2024-11-15 11:46:37.996395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.167 [2024-11-15 11:46:37.996401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.167 [2024-11-15 11:46:37.996416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 
00:28:37.167 [2024-11-15 11:46:38.006230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.167 [2024-11-15 11:46:38.006296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.167 [2024-11-15 11:46:38.006309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.167 [2024-11-15 11:46:38.006316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.167 [2024-11-15 11:46:38.006322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.167 [2024-11-15 11:46:38.006336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.167 qpair failed and we were unable to recover it. 00:28:37.427 [2024-11-15 11:46:38.016311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.427 [2024-11-15 11:46:38.016369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.427 [2024-11-15 11:46:38.016382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.427 [2024-11-15 11:46:38.016389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.427 [2024-11-15 11:46:38.016396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.427 [2024-11-15 11:46:38.016411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.427 qpair failed and we were unable to recover it. 00:28:37.427 [2024-11-15 11:46:38.026379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.427 [2024-11-15 11:46:38.026441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.427 [2024-11-15 11:46:38.026454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.427 [2024-11-15 11:46:38.026465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.427 [2024-11-15 11:46:38.026471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.427 [2024-11-15 11:46:38.026487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.427 qpair failed and we were unable to recover it. 
00:28:37.427 [2024-11-15 11:46:38.036432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.427 [2024-11-15 11:46:38.036502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.427 [2024-11-15 11:46:38.036515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.036521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.036528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.036543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.046412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.046471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.046484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.046494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.046500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.046515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.056497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.056558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.056571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.056577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.056584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.056599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 
00:28:37.428 [2024-11-15 11:46:38.066522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.066586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.066608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.066615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.066621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.066635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.076522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.076589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.076603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.076610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.076616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.076630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.086450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.086514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.086528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.086535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.086540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.086559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 
00:28:37.428 [2024-11-15 11:46:38.096661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.096722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.096736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.096742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.096749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.096765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.106639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.106696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.106711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.106718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.106726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.106741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.116627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.116694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.116709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.116715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.116721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.116735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 
00:28:37.428 [2024-11-15 11:46:38.126618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.126708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.126723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.126729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.126736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.126750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.136725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.136787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.136800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.136807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.136813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.136828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.146747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.146803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.146816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.146823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.146829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.146843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 
00:28:37.428 [2024-11-15 11:46:38.156728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.428 [2024-11-15 11:46:38.156791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.428 [2024-11-15 11:46:38.156804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.428 [2024-11-15 11:46:38.156811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.428 [2024-11-15 11:46:38.156817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.428 [2024-11-15 11:46:38.156831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.428 qpair failed and we were unable to recover it. 00:28:37.428 [2024-11-15 11:46:38.166745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.166799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.166813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.166820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.166826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.166840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.176786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.176844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.176860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.176868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.176874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.176889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 
00:28:37.429 [2024-11-15 11:46:38.186786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.186844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.186856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.186862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.186868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.186884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.196789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.196856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.196869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.196876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.196882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.196897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.206850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.206907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.206921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.206927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.206933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.206948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 
00:28:37.429 [2024-11-15 11:46:38.216941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.217010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.217024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.217031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.217039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.217054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.226973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.227034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.227048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.227054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.227060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.227075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.236994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.237052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.237064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.237071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.237077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.237091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 
00:28:37.429 [2024-11-15 11:46:38.247027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.247086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.247099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.247105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.247111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.247126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.256978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.257037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.257050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.257056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.257062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.257077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.429 [2024-11-15 11:46:38.267085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.267144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.267157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.267164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.267170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.267184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 
00:28:37.429 [2024-11-15 11:46:38.277056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.429 [2024-11-15 11:46:38.277115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.429 [2024-11-15 11:46:38.277128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.429 [2024-11-15 11:46:38.277134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.429 [2024-11-15 11:46:38.277140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.429 [2024-11-15 11:46:38.277154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.429 qpair failed and we were unable to recover it. 00:28:37.690 [2024-11-15 11:46:38.287014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.690 [2024-11-15 11:46:38.287074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.690 [2024-11-15 11:46:38.287089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.690 [2024-11-15 11:46:38.287096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.690 [2024-11-15 11:46:38.287103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.690 [2024-11-15 11:46:38.287117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.690 qpair failed and we were unable to recover it. 00:28:37.690 [2024-11-15 11:46:38.297172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.690 [2024-11-15 11:46:38.297231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.690 [2024-11-15 11:46:38.297244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.690 [2024-11-15 11:46:38.297251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.690 [2024-11-15 11:46:38.297257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.690 [2024-11-15 11:46:38.297272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.690 qpair failed and we were unable to recover it. 
00:28:37.690 [2024-11-15 11:46:38.307217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.307279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.307296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.307303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.307309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.307323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.317223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.317284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.317298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.317304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.317310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.317325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.327240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.327293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.327306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.327312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.327318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.327332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 
00:28:37.691 [2024-11-15 11:46:38.337283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.337347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.337361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.337368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.337373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.337388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.347294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.347352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.347365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.347372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.347382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.347398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.357345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.357404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.357417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.357424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.357430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.357445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 
00:28:37.691 [2024-11-15 11:46:38.367319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.367376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.367389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.367396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.367402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.367417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.377471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.377551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.377565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.377571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.377577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.377592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.387437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.387501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.387513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.387520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.387526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.387541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 
00:28:37.691 [2024-11-15 11:46:38.397569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.397644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.397658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.397665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.397671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.397686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.407478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.407539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.407552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.407559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.407564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.691 [2024-11-15 11:46:38.407580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.691 qpair failed and we were unable to recover it. 00:28:37.691 [2024-11-15 11:46:38.417566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.691 [2024-11-15 11:46:38.417625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.691 [2024-11-15 11:46:38.417639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.691 [2024-11-15 11:46:38.417646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.691 [2024-11-15 11:46:38.417652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.417667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 
00:28:37.692 [2024-11-15 11:46:38.427612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.427676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.427690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.427697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.427703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.427717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.437575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.437631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.437650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.437657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.437663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.437678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.447567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.447642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.447663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.447669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.447675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.447690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 
00:28:37.692 [2024-11-15 11:46:38.457652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.457712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.457726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.457732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.457738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.457753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.467602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.467658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.467671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.467677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.467683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.467698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.477698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.477772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.477786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.477795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.477801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.477816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 
00:28:37.692 [2024-11-15 11:46:38.487601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.487658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.487671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.487677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.487682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.487697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.497766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.497831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.497852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.497859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.497865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.497879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.507797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.507862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.507877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.507884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.507890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.507904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 
00:28:37.692 [2024-11-15 11:46:38.517742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.517801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.517814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.517821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.517827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.517846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.527792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.527849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.527862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.527868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.527874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.527888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 00:28:37.692 [2024-11-15 11:46:38.537893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.692 [2024-11-15 11:46:38.537989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.692 [2024-11-15 11:46:38.538003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.692 [2024-11-15 11:46:38.538009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.692 [2024-11-15 11:46:38.538015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.692 [2024-11-15 11:46:38.538031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.692 qpair failed and we were unable to recover it. 
00:28:37.953 [2024-11-15 11:46:38.547903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.953 [2024-11-15 11:46:38.548001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.953 [2024-11-15 11:46:38.548015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.953 [2024-11-15 11:46:38.548022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.953 [2024-11-15 11:46:38.548028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.953 [2024-11-15 11:46:38.548043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-11-15 11:46:38.557956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.953 [2024-11-15 11:46:38.558043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.953 [2024-11-15 11:46:38.558057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.953 [2024-11-15 11:46:38.558064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.953 [2024-11-15 11:46:38.558070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.953 [2024-11-15 11:46:38.558085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.953 qpair failed and we were unable to recover it. 00:28:37.953 [2024-11-15 11:46:38.567895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.953 [2024-11-15 11:46:38.567954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.953 [2024-11-15 11:46:38.567966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.953 [2024-11-15 11:46:38.567973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.567978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.567993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.954 [2024-11-15 11:46:38.577986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.578048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.578060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.578067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.578073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.578089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.588007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.588065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.588079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.588086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.588092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.588107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.598037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.598095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.598108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.598114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.598121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.598135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.954 [2024-11-15 11:46:38.608018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.608076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.608089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.608098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.608104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.608119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.618104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.618179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.618193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.618200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.618206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.618221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.628164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.628221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.628234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.628241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.628247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.628262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.954 [2024-11-15 11:46:38.638169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.638248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.638262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.638269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.638275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.638289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.648065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.648121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.648135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.648141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.648147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.648165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.658200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.658262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.658275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.658282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.658288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.658303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.954 [2024-11-15 11:46:38.668180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.668243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.668256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.668264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.668270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.668284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.678352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.678428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.678442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.678449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.678456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.678474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 00:28:37.954 [2024-11-15 11:46:38.688254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.688307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.688320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.688327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.954 [2024-11-15 11:46:38.688333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.954 [2024-11-15 11:46:38.688347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.954 qpair failed and we were unable to recover it. 
00:28:37.954 [2024-11-15 11:46:38.698334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.954 [2024-11-15 11:46:38.698406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.954 [2024-11-15 11:46:38.698421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.954 [2024-11-15 11:46:38.698428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.698434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.698449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.708369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.708433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.708447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.708454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.708464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.708479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.718405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.718472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.718485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.718492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.718498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.718513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 
00:28:37.955 [2024-11-15 11:46:38.728366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.728422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.728436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.728443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.728449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.728467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.738379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.738441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.738462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.738469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.738475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.738489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.748487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.748546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.748559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.748565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.748571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.748586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 
00:28:37.955 [2024-11-15 11:46:38.758508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.758571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.758584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.758591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.758597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.758611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.768490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.768547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.768560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.768566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.768572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.768586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.778552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.778612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.778626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.778633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.778642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.778656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 
00:28:37.955 [2024-11-15 11:46:38.788592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.788656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.788669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.788675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.788681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.788696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:37.955 [2024-11-15 11:46:38.798621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.955 [2024-11-15 11:46:38.798679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.955 [2024-11-15 11:46:38.798693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.955 [2024-11-15 11:46:38.798700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.955 [2024-11-15 11:46:38.798706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:37.955 [2024-11-15 11:46:38.798721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.955 qpair failed and we were unable to recover it. 00:28:38.215 [2024-11-15 11:46:38.808517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.215 [2024-11-15 11:46:38.808575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.215 [2024-11-15 11:46:38.808588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.215 [2024-11-15 11:46:38.808595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.215 [2024-11-15 11:46:38.808601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.215 [2024-11-15 11:46:38.808616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.215 qpair failed and we were unable to recover it. 
00:28:38.215 [2024-11-15 11:46:38.818680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.215 [2024-11-15 11:46:38.818755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.215 [2024-11-15 11:46:38.818770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.215 [2024-11-15 11:46:38.818777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.215 [2024-11-15 11:46:38.818783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.215 [2024-11-15 11:46:38.818798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.215 qpair failed and we were unable to recover it. 00:28:38.215 [2024-11-15 11:46:38.828704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.215 [2024-11-15 11:46:38.828800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.215 [2024-11-15 11:46:38.828815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.828822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.828828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.828842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.838738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.838809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.838824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.838831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.838836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.838852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 
00:28:38.216 [2024-11-15 11:46:38.848691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.848774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.848789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.848795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.848801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.848816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.858800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.858867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.858881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.858887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.858894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.858910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.868823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.868893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.868910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.868917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.868923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.868938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 
00:28:38.216 [2024-11-15 11:46:38.878853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.878915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.878927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.878935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.878942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.878956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.888826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.888881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.888894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.888900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.888906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.888920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.898913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.898977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.898990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.898997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.899003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.899019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 
00:28:38.216 [2024-11-15 11:46:38.908972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.909064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.909079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.909085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.909094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.909109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.918977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.919033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.919046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.919052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.919058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.919072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.928958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.929014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.929028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.929035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.929041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.929055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 
00:28:38.216 [2024-11-15 11:46:38.939072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.939172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.939186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.939193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.939198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.939213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.949058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.949115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.949128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.216 [2024-11-15 11:46:38.949136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.216 [2024-11-15 11:46:38.949142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.216 [2024-11-15 11:46:38.949157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.216 qpair failed and we were unable to recover it. 00:28:38.216 [2024-11-15 11:46:38.959083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.216 [2024-11-15 11:46:38.959143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.216 [2024-11-15 11:46:38.959156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:38.959164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:38.959170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:38.959185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 
00:28:38.217 [2024-11-15 11:46:38.969056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:38.969111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:38.969124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:38.969130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:38.969136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:38.969151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.217 [2024-11-15 11:46:38.979135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:38.979202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:38.979216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:38.979222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:38.979228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:38.979243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.217 [2024-11-15 11:46:38.989170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:38.989251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:38.989266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:38.989272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:38.989278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:38.989293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 
00:28:38.217 [2024-11-15 11:46:38.999191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:38.999253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:38.999267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:38.999274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:38.999280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:38.999294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.217 [2024-11-15 11:46:39.009170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:39.009226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:39.009240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:39.009247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:39.009253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:39.009268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.217 [2024-11-15 11:46:39.019286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:39.019344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:39.019357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:39.019364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:39.019371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:39.019386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 
00:28:38.217 [2024-11-15 11:46:39.029286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:39.029348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:39.029361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:39.029368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:39.029374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:39.029389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.217 [2024-11-15 11:46:39.039315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:39.039377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:39.039391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:39.039401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:39.039407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:39.039423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.217 [2024-11-15 11:46:39.049297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:39.049380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:39.049394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:39.049400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:39.049406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:39.049422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 
00:28:38.217 [2024-11-15 11:46:39.059366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.217 [2024-11-15 11:46:39.059427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.217 [2024-11-15 11:46:39.059440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.217 [2024-11-15 11:46:39.059447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.217 [2024-11-15 11:46:39.059454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.217 [2024-11-15 11:46:39.059472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.217 qpair failed and we were unable to recover it. 00:28:38.477 [2024-11-15 11:46:39.069394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.477 [2024-11-15 11:46:39.069450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.477 [2024-11-15 11:46:39.069466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.477 [2024-11-15 11:46:39.069473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.477 [2024-11-15 11:46:39.069479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.477 [2024-11-15 11:46:39.069494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.477 qpair failed and we were unable to recover it. 00:28:38.477 [2024-11-15 11:46:39.079430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.477 [2024-11-15 11:46:39.079490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.477 [2024-11-15 11:46:39.079503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.477 [2024-11-15 11:46:39.079510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.477 [2024-11-15 11:46:39.079516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.477 [2024-11-15 11:46:39.079534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.477 qpair failed and we were unable to recover it. 
00:28:38.477 [2024-11-15 11:46:39.089404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.477 [2024-11-15 11:46:39.089483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.089496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.089503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.089510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.089524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.099491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.099548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.099561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.099568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.099574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.099589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.109526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.109586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.109599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.109606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.109612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.109627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 
00:28:38.478 [2024-11-15 11:46:39.119567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.119627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.119641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.119648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.119655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.119670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.129549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.129628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.129642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.129649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.129655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.129670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.139604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.139668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.139682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.139688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.139695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.139709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 
00:28:38.478 [2024-11-15 11:46:39.149700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.149773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.149787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.149793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.149800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.149814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.159661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.159739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.159754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.159760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.159767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.159782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.169633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.169692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.169705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.169714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.169720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.169736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 
00:28:38.478 [2024-11-15 11:46:39.179715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.179777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.179790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.179797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.179804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.179818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.189783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.189869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.189884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.189891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.189898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.189912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.199774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.199841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.199855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.199862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.199868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.199883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 
00:28:38.478 [2024-11-15 11:46:39.209739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.209794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.478 [2024-11-15 11:46:39.209807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.478 [2024-11-15 11:46:39.209814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.478 [2024-11-15 11:46:39.209819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.478 [2024-11-15 11:46:39.209840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.478 qpair failed and we were unable to recover it. 00:28:38.478 [2024-11-15 11:46:39.219839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.478 [2024-11-15 11:46:39.219950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.219965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.219971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.219977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.219992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.229846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.229911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.229925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.229931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.229938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.229952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 
00:28:38.479 [2024-11-15 11:46:39.239801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.239862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.239875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.239883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.239889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.239905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.249861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.249926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.249939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.249946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.249953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.249968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.259939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.259997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.260010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.260017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.260023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.260038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 
00:28:38.479 [2024-11-15 11:46:39.269988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.270084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.270098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.270105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.270110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.270126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.279954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.280017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.280030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.280037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.280043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.280057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.289948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.290004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.290017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.290024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.290030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.290045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 
00:28:38.479 [2024-11-15 11:46:39.300051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.300117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.300133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.300140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.300146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.300161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.310088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.310149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.310162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.310169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.310177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.310191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 00:28:38.479 [2024-11-15 11:46:39.320088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.479 [2024-11-15 11:46:39.320177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.479 [2024-11-15 11:46:39.320191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.479 [2024-11-15 11:46:39.320198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.479 [2024-11-15 11:46:39.320204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.479 [2024-11-15 11:46:39.320219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.479 qpair failed and we were unable to recover it. 
00:28:38.740 [2024-11-15 11:46:39.330083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.330140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.330154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.330160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.330166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.330182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.340162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.340221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.340237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.340244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.340253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.340268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.350131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.350195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.350209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.350216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.350222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.350238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 
00:28:38.740 [2024-11-15 11:46:39.360143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.360199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.360211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.360218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.360224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.360239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.370186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.370260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.370282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.370289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.370295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.370310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.380285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.380353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.380367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.380374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.380381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.380396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 
00:28:38.740 [2024-11-15 11:46:39.390233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.390339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.390354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.390360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.390366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.390381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.400255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.400315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.400329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.400335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.400341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.400356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.410247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.410304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.410319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.410325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.410331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.410346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 
00:28:38.740 [2024-11-15 11:46:39.420411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.420471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.420485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.420491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.420497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.420511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.430417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.430481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.430498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.430504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.430510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.430525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 00:28:38.740 [2024-11-15 11:46:39.440440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.440504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.440517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.440524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.440530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.740 [2024-11-15 11:46:39.440545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.740 qpair failed and we were unable to recover it. 
00:28:38.740 [2024-11-15 11:46:39.450338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.740 [2024-11-15 11:46:39.450395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.740 [2024-11-15 11:46:39.450407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.740 [2024-11-15 11:46:39.450413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.740 [2024-11-15 11:46:39.450419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.450434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.460434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.460504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.460517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.460524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.460530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.460545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.470551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.470618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.470632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.470639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.470647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.470663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 
00:28:38.741 [2024-11-15 11:46:39.480489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.480551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.480565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.480572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.480579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.480594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.490536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.490590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.490604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.490610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.490616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.490630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.500668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.500731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.500744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.500751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.500757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.500772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 
00:28:38.741 [2024-11-15 11:46:39.510569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.510627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.510640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.510646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.510652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.510666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.520605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.520665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.520678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.520686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.520692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.520707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.530589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.530647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.530660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.530667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.530672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.530687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 
00:28:38.741 [2024-11-15 11:46:39.540656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.540724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.540738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.540745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.540751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.540765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.550804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.550892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.550906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.550913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.550920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.550935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.560816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.560880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.560893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.560900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.560907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.560923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 
00:28:38.741 [2024-11-15 11:46:39.570759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.570813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.570826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.570832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.570838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.741 [2024-11-15 11:46:39.570852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.741 qpair failed and we were unable to recover it. 00:28:38.741 [2024-11-15 11:46:39.580824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.741 [2024-11-15 11:46:39.580886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.741 [2024-11-15 11:46:39.580899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.741 [2024-11-15 11:46:39.580906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.741 [2024-11-15 11:46:39.580912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:38.742 [2024-11-15 11:46:39.580926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.742 qpair failed and we were unable to recover it. 00:28:39.002 [2024-11-15 11:46:39.590796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.590865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.590879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.590886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.590892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.590907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 
00:28:39.002 [2024-11-15 11:46:39.600866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.600925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.600938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.600949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.600956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.600971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 00:28:39.002 [2024-11-15 11:46:39.610804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.610858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.610871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.610878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.610883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.610897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 00:28:39.002 [2024-11-15 11:46:39.620985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.621050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.621064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.621070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.621077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.621090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 
00:28:39.002 [2024-11-15 11:46:39.630952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.631025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.631040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.631046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.631052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.631067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 00:28:39.002 [2024-11-15 11:46:39.641019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.641080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.641093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.641100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.641107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.641125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 00:28:39.002 [2024-11-15 11:46:39.650985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.002 [2024-11-15 11:46:39.651043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.002 [2024-11-15 11:46:39.651056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.002 [2024-11-15 11:46:39.651062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.002 [2024-11-15 11:46:39.651068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.002 [2024-11-15 11:46:39.651082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.002 qpair failed and we were unable to recover it. 
00:28:39.002 [2024-11-15 11:46:39.660999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.661065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.661078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.661085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.661091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.661106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.671028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.671114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.671128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.671135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.671141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.671156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.681148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.681213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.681225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.681232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.681238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.681254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 
00:28:39.003 [2024-11-15 11:46:39.691033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.691093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.691107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.691113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.691118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.691133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.701207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.701265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.701278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.701285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.701291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.701306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.711144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.711210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.711224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.711231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.711237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.711252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 
00:28:39.003 [2024-11-15 11:46:39.721267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.721327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.721340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.721347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.721354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.721369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.731226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.731335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.731350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.731360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.731367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.731383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.741275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.741395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.741410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.741416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.741422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.741437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 
00:28:39.003 [2024-11-15 11:46:39.751366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.751473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.751488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.751494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.751501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.751516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.761393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.761454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.761472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.761479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.761484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.761500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.771343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.771401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.771415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.771422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.771427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.771445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 
00:28:39.003 [2024-11-15 11:46:39.781427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.781501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.003 [2024-11-15 11:46:39.781514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.003 [2024-11-15 11:46:39.781520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.003 [2024-11-15 11:46:39.781526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.003 [2024-11-15 11:46:39.781542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.003 qpair failed and we were unable to recover it. 00:28:39.003 [2024-11-15 11:46:39.791447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.003 [2024-11-15 11:46:39.791512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.791527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.791533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.791539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.791554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 00:28:39.004 [2024-11-15 11:46:39.801474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.004 [2024-11-15 11:46:39.801543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.801557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.801563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.801569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.801584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 
00:28:39.004 [2024-11-15 11:46:39.811470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.004 [2024-11-15 11:46:39.811559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.811573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.811580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.811586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.811600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 00:28:39.004 [2024-11-15 11:46:39.821540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.004 [2024-11-15 11:46:39.821602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.821616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.821624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.821630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.821647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 00:28:39.004 [2024-11-15 11:46:39.831618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.004 [2024-11-15 11:46:39.831703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.831718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.831724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.831730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.831744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 
00:28:39.004 [2024-11-15 11:46:39.841634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.004 [2024-11-15 11:46:39.841696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.841709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.841716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.841722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.841736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 00:28:39.004 [2024-11-15 11:46:39.851569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.004 [2024-11-15 11:46:39.851623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.004 [2024-11-15 11:46:39.851636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.004 [2024-11-15 11:46:39.851642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.004 [2024-11-15 11:46:39.851648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.004 [2024-11-15 11:46:39.851662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.004 qpair failed and we were unable to recover it. 00:28:39.264 [2024-11-15 11:46:39.861670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.264 [2024-11-15 11:46:39.861762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.264 [2024-11-15 11:46:39.861779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.264 [2024-11-15 11:46:39.861786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.264 [2024-11-15 11:46:39.861792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.264 [2024-11-15 11:46:39.861807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.264 qpair failed and we were unable to recover it. 
00:28:39.264 [2024-11-15 11:46:39.871703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.264 [2024-11-15 11:46:39.871772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.264 [2024-11-15 11:46:39.871786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.264 [2024-11-15 11:46:39.871793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.264 [2024-11-15 11:46:39.871799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.264 [2024-11-15 11:46:39.871813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.264 qpair failed and we were unable to recover it. 00:28:39.264 [2024-11-15 11:46:39.881703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.264 [2024-11-15 11:46:39.881767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.264 [2024-11-15 11:46:39.881780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.264 [2024-11-15 11:46:39.881787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.264 [2024-11-15 11:46:39.881794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.264 [2024-11-15 11:46:39.881808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.264 qpair failed and we were unable to recover it. 00:28:39.264 [2024-11-15 11:46:39.891685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.264 [2024-11-15 11:46:39.891742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.264 [2024-11-15 11:46:39.891755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.264 [2024-11-15 11:46:39.891762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.264 [2024-11-15 11:46:39.891767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.264 [2024-11-15 11:46:39.891782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.264 qpair failed and we were unable to recover it. 
00:28:39.264 [2024-11-15 11:46:39.901776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.264 [2024-11-15 11:46:39.901842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.264 [2024-11-15 11:46:39.901854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.264 [2024-11-15 11:46:39.901861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.264 [2024-11-15 11:46:39.901870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.264 [2024-11-15 11:46:39.901886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.264 qpair failed and we were unable to recover it. 00:28:39.264 [2024-11-15 11:46:39.911797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.911861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.911880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.911887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.911892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.911908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:39.921828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.921934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.921948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.921955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.921961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.921975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 
00:28:39.265 [2024-11-15 11:46:39.931811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.931867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.931880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.931886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.931892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.931906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:39.941907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.941968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.941982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.941989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.941995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.942009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:39.951948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.952005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.952018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.952024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.952030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.952045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 
00:28:39.265 [2024-11-15 11:46:39.961942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.962005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.962017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.962025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.962031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.962045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:39.971906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.971975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.971990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.971997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.972003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.972017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:39.981992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.982055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.982067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.982074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.982081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.982094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 
00:28:39.265 [2024-11-15 11:46:39.992046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:39.992100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:39.992116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:39.992123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:39.992129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:39.992143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:40.002117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:40.002188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:40.002201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:40.002208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:40.002215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:40.002230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.265 [2024-11-15 11:46:40.012084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:40.012159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:40.012174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:40.012181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:40.012187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:40.012202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 
00:28:39.265 [2024-11-15 11:46:40.022137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.265 [2024-11-15 11:46:40.022207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.265 [2024-11-15 11:46:40.022221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.265 [2024-11-15 11:46:40.022227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.265 [2024-11-15 11:46:40.022233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.265 [2024-11-15 11:46:40.022248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.265 qpair failed and we were unable to recover it. 00:28:39.266 [2024-11-15 11:46:40.032277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.032357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.032372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.032380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.032392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.032408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 00:28:39.266 [2024-11-15 11:46:40.042197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.042263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.042281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.042289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.042296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.042312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 
00:28:39.266 [2024-11-15 11:46:40.052094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.052188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.052203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.052210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.052216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.052232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 00:28:39.266 [2024-11-15 11:46:40.062327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.062410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.062425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.062432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.062438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.062454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 00:28:39.266 [2024-11-15 11:46:40.072280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.072344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.072357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.072364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.072370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.072387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 
00:28:39.266 [2024-11-15 11:46:40.082325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.082415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.082431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.082438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.082444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.082465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 00:28:39.266 [2024-11-15 11:46:40.092278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.092336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.092350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.092356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.092362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.092377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 00:28:39.266 [2024-11-15 11:46:40.102374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.102484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.102499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.102506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.102512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.102527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 
00:28:39.266 [2024-11-15 11:46:40.112400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.266 [2024-11-15 11:46:40.112468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.266 [2024-11-15 11:46:40.112482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.266 [2024-11-15 11:46:40.112488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.266 [2024-11-15 11:46:40.112495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.266 [2024-11-15 11:46:40.112510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.266 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.122434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.122502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.122516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.122522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.122528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.122544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.132389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.132446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.132462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.132469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.132475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.132490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 
00:28:39.527 [2024-11-15 11:46:40.142482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.142544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.142557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.142564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.142570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.142584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.152509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.152566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.152579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.152585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.152591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.152607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.162569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.162661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.162675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.162684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.162691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.162706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 
00:28:39.527 [2024-11-15 11:46:40.172502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.172560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.172573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.172579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.172585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.172600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.182585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.182648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.182661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.182668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.182674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.182688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.192641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.192703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.192716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.192722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.192729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.192744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 
00:28:39.527 [2024-11-15 11:46:40.202561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.202621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.202634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.202641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.202648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.202666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.527 [2024-11-15 11:46:40.212612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.527 [2024-11-15 11:46:40.212669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.527 [2024-11-15 11:46:40.212683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.527 [2024-11-15 11:46:40.212689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.527 [2024-11-15 11:46:40.212695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.527 [2024-11-15 11:46:40.212710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.527 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.222700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.222758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.222771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.222778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.222784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.222799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 
00:28:39.528 [2024-11-15 11:46:40.232726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.232798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.232813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.232819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.232826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.232841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.242748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.242810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.242823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.242830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.242836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.242852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.252704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.252762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.252775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.252781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.252787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.252802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 
00:28:39.528 [2024-11-15 11:46:40.262794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.262855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.262868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.262874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.262880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.262895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.272876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.272934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.272947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.272954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.272961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.272975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.282850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.282911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.282923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.282930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.282936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.282950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 
00:28:39.528 [2024-11-15 11:46:40.292832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.292887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.292903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.292909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.292915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.292930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.302944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.303004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.303016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.303022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.303029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.303043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.312949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.313007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.313020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.313027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.313032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.313047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 
00:28:39.528 [2024-11-15 11:46:40.322967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.323028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.323041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.323048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.323054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.323069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.332898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.332960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.332973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.332979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.332985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.333003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 00:28:39.528 [2024-11-15 11:46:40.343014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.528 [2024-11-15 11:46:40.343077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.528 [2024-11-15 11:46:40.343090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.528 [2024-11-15 11:46:40.343097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.528 [2024-11-15 11:46:40.343103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.528 [2024-11-15 11:46:40.343118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.528 qpair failed and we were unable to recover it. 
00:28:39.528 [2024-11-15 11:46:40.353061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.529 [2024-11-15 11:46:40.353121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.529 [2024-11-15 11:46:40.353133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.529 [2024-11-15 11:46:40.353140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.529 [2024-11-15 11:46:40.353146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.529 [2024-11-15 11:46:40.353161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.529 qpair failed and we were unable to recover it. 00:28:39.529 [2024-11-15 11:46:40.363081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.529 [2024-11-15 11:46:40.363141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.529 [2024-11-15 11:46:40.363154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.529 [2024-11-15 11:46:40.363161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.529 [2024-11-15 11:46:40.363167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.529 [2024-11-15 11:46:40.363181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.529 qpair failed and we were unable to recover it. 00:28:39.529 [2024-11-15 11:46:40.373100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.529 [2024-11-15 11:46:40.373157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.529 [2024-11-15 11:46:40.373170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.529 [2024-11-15 11:46:40.373177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.529 [2024-11-15 11:46:40.373182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.529 [2024-11-15 11:46:40.373197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.529 qpair failed and we were unable to recover it. 
00:28:39.789 [2024-11-15 11:46:40.383215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.383279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.383292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.383299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.383305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.383320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 00:28:39.789 [2024-11-15 11:46:40.393183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.393250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.393265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.393271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.393277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.393291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 00:28:39.789 [2024-11-15 11:46:40.403203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.403267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.403281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.403287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.403293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.403308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 
00:28:39.789 [2024-11-15 11:46:40.413187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.413257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.413270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.413277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.413284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.413299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 00:28:39.789 [2024-11-15 11:46:40.423264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.423323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.423339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.423346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.423352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.423367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 00:28:39.789 [2024-11-15 11:46:40.433293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.433350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.433363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.433369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.433376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.433390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 
00:28:39.789 [2024-11-15 11:46:40.443322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.443392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.443407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.443414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.789 [2024-11-15 11:46:40.443420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.789 [2024-11-15 11:46:40.443435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.789 qpair failed and we were unable to recover it. 00:28:39.789 [2024-11-15 11:46:40.453293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.789 [2024-11-15 11:46:40.453350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.789 [2024-11-15 11:46:40.453364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.789 [2024-11-15 11:46:40.453370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.453376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.453391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.463384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.463453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.463470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.463478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.463487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.463502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 
00:28:39.790 [2024-11-15 11:46:40.473433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.473525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.473539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.473546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.473552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.473567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.483359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.483424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.483438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.483444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.483450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.483468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.493424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.493480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.493493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.493500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.493506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.493520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 
00:28:39.790 [2024-11-15 11:46:40.503526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.503588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.503601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.503608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.503615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.503629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.513531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.513599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.513612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.513619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.513625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.513640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.523582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.523641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.523654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.523660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.523666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.523682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 
00:28:39.790 [2024-11-15 11:46:40.533541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.533598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.533611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.533618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.533623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.533640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.543620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.543687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.543702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.543708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.543714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.543729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.553581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.553649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.553670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.553677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.553683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.553699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 
00:28:39.790 [2024-11-15 11:46:40.563659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.563719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.563733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.563740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.563747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.563763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.573663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.573718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.573732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.573738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.790 [2024-11-15 11:46:40.573744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.790 [2024-11-15 11:46:40.573759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.790 qpair failed and we were unable to recover it. 00:28:39.790 [2024-11-15 11:46:40.583757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.790 [2024-11-15 11:46:40.583818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.790 [2024-11-15 11:46:40.583831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.790 [2024-11-15 11:46:40.583837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.791 [2024-11-15 11:46:40.583844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.791 [2024-11-15 11:46:40.583859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.791 qpair failed and we were unable to recover it. 
00:28:39.791 [2024-11-15 11:46:40.593773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.791 [2024-11-15 11:46:40.593884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.791 [2024-11-15 11:46:40.593899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.791 [2024-11-15 11:46:40.593909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.791 [2024-11-15 11:46:40.593915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.791 [2024-11-15 11:46:40.593930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.791 qpair failed and we were unable to recover it. 00:28:39.791 [2024-11-15 11:46:40.603806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.791 [2024-11-15 11:46:40.603869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.791 [2024-11-15 11:46:40.603883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.791 [2024-11-15 11:46:40.603890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.791 [2024-11-15 11:46:40.603896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.791 [2024-11-15 11:46:40.603911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.791 qpair failed and we were unable to recover it. 00:28:39.791 [2024-11-15 11:46:40.613763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.791 [2024-11-15 11:46:40.613817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.791 [2024-11-15 11:46:40.613830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.791 [2024-11-15 11:46:40.613837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.791 [2024-11-15 11:46:40.613842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.791 [2024-11-15 11:46:40.613857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.791 qpair failed and we were unable to recover it. 
00:28:39.791 [2024-11-15 11:46:40.623843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.791 [2024-11-15 11:46:40.623909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.791 [2024-11-15 11:46:40.623921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.791 [2024-11-15 11:46:40.623928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.791 [2024-11-15 11:46:40.623934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.791 [2024-11-15 11:46:40.623949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.791 qpair failed and we were unable to recover it. 00:28:39.791 [2024-11-15 11:46:40.633874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.791 [2024-11-15 11:46:40.633930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.791 [2024-11-15 11:46:40.633943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.791 [2024-11-15 11:46:40.633950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.791 [2024-11-15 11:46:40.633956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:39.791 [2024-11-15 11:46:40.633971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.791 qpair failed and we were unable to recover it. 00:28:40.052 [2024-11-15 11:46:40.643909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.052 [2024-11-15 11:46:40.643967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.052 [2024-11-15 11:46:40.643981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.052 [2024-11-15 11:46:40.643989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.052 [2024-11-15 11:46:40.643995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.052 [2024-11-15 11:46:40.644009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.052 qpair failed and we were unable to recover it. 
00:28:40.052 [2024-11-15 11:46:40.653869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.052 [2024-11-15 11:46:40.653930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.052 [2024-11-15 11:46:40.653944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.052 [2024-11-15 11:46:40.653950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.052 [2024-11-15 11:46:40.653956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.052 [2024-11-15 11:46:40.653971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.052 qpair failed and we were unable to recover it. 00:28:40.052 [2024-11-15 11:46:40.663966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.052 [2024-11-15 11:46:40.664029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.052 [2024-11-15 11:46:40.664042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.052 [2024-11-15 11:46:40.664049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.052 [2024-11-15 11:46:40.664055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.052 [2024-11-15 11:46:40.664070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.052 qpair failed and we were unable to recover it. 00:28:40.052 [2024-11-15 11:46:40.673993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.674054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.674066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.674073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.674079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.674093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 
00:28:40.053 [2024-11-15 11:46:40.684002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.684076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.684091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.684097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.684103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.684118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.693993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.694050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.694063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.694070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.694075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.694090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.704083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.704147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.704159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.704166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.704172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.704187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 
00:28:40.053 [2024-11-15 11:46:40.714107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.714171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.714186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.714192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.714197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.714212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.724122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.724182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.724195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.724210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.724215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.724230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.734100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.734204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.734218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.734224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.734231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.734245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 
00:28:40.053 [2024-11-15 11:46:40.744257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.744330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.744345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.744351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.744357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.744372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.754225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.754286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.754299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.754306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.754313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.754328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.764256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.764319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.764334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.764340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.764347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.764365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 
00:28:40.053 [2024-11-15 11:46:40.774225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.774283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.774296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.774302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.774309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.774323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.784317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.784375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.784387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.784394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.784400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.784415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 00:28:40.053 [2024-11-15 11:46:40.794324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.053 [2024-11-15 11:46:40.794387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.053 [2024-11-15 11:46:40.794400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.053 [2024-11-15 11:46:40.794407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.053 [2024-11-15 11:46:40.794414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.053 [2024-11-15 11:46:40.794429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.053 qpair failed and we were unable to recover it. 
00:28:40.053 [2024-11-15 11:46:40.804329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.804388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.804401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.804407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.804414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.804429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.054 [2024-11-15 11:46:40.814331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.814390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.814405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.814411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.814417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.814432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.054 [2024-11-15 11:46:40.824428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.824504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.824518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.824526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.824532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.824549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 
00:28:40.054 [2024-11-15 11:46:40.834435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.834508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.834522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.834529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.834536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.834551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.054 [2024-11-15 11:46:40.844469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.844575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.844591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.844598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.844604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.844619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.054 [2024-11-15 11:46:40.854415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.854507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.854524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.854531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.854537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.854552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 
00:28:40.054 [2024-11-15 11:46:40.864523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.864582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.864594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.864601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.864607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.864622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.054 [2024-11-15 11:46:40.874484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.874544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.874557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.874564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.874571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.874585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.054 [2024-11-15 11:46:40.884687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.884778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.884793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.884799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.884805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.884821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 
00:28:40.054 [2024-11-15 11:46:40.894565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.054 [2024-11-15 11:46:40.894625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.054 [2024-11-15 11:46:40.894639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.054 [2024-11-15 11:46:40.894646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.054 [2024-11-15 11:46:40.894651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.054 [2024-11-15 11:46:40.894670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.054 qpair failed and we were unable to recover it. 00:28:40.313 [2024-11-15 11:46:40.904613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.313 [2024-11-15 11:46:40.904676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.313 [2024-11-15 11:46:40.904689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.313 [2024-11-15 11:46:40.904696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.313 [2024-11-15 11:46:40.904702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.313 [2024-11-15 11:46:40.904717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.313 qpair failed and we were unable to recover it. 00:28:40.313 [2024-11-15 11:46:40.914599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.313 [2024-11-15 11:46:40.914663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.313 [2024-11-15 11:46:40.914676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.313 [2024-11-15 11:46:40.914683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.313 [2024-11-15 11:46:40.914690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.313 [2024-11-15 11:46:40.914704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.313 qpair failed and we were unable to recover it. 
00:28:40.313 [2024-11-15 11:46:40.924638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.313 [2024-11-15 11:46:40.924697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.313 [2024-11-15 11:46:40.924710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.313 [2024-11-15 11:46:40.924717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.313 [2024-11-15 11:46:40.924723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.313 [2024-11-15 11:46:40.924738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.313 qpair failed and we were unable to recover it. 00:28:40.313 [2024-11-15 11:46:40.934697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.313 [2024-11-15 11:46:40.934756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.313 [2024-11-15 11:46:40.934769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.313 [2024-11-15 11:46:40.934775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.313 [2024-11-15 11:46:40.934780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.934796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:40.944757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:40.944831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:40.944845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:40.944852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:40.944858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.944872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 
00:28:40.314 [2024-11-15 11:46:40.954714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:40.954778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:40.954791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:40.954798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:40.954805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.954820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:40.964817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:40.964880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:40.964894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:40.964901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:40.964907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.964923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:40.974744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:40.974821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:40.974835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:40.974841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:40.974847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.974862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 
00:28:40.314 [2024-11-15 11:46:40.984888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:40.984947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:40.984963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:40.984969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:40.984975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.984990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:40.994825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:40.994884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:40.994897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:40.994903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:40.994909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:40.994924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:41.004843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.004934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.004948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.004955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.004961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.004976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 
00:28:40.314 [2024-11-15 11:46:41.014949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.015007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.015020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.015027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.015032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.015046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:41.024963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.025022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.025035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.025042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.025051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.025066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:41.034956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.035020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.035032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.035039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.035046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.035061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 
00:28:40.314 [2024-11-15 11:46:41.045006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.045067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.045080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.045088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.045094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.045108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:41.054931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.054988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.055001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.055007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.055013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.055027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 00:28:40.314 [2024-11-15 11:46:41.065172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.065247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.314 [2024-11-15 11:46:41.065262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.314 [2024-11-15 11:46:41.065268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.314 [2024-11-15 11:46:41.065274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.314 [2024-11-15 11:46:41.065289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.314 qpair failed and we were unable to recover it. 
00:28:40.314 [2024-11-15 11:46:41.075116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.314 [2024-11-15 11:46:41.075181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.075195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.075202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.075207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.075222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 00:28:40.315 [2024-11-15 11:46:41.085168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.085235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.085250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.085256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.085262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.085276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 00:28:40.315 [2024-11-15 11:46:41.095109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.095166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.095179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.095185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.095191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.095205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 
00:28:40.315 [2024-11-15 11:46:41.105210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.105272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.105285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.105292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.105298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.105312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 00:28:40.315 [2024-11-15 11:46:41.115241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.115300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.115317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.115324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.115330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.115344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 00:28:40.315 [2024-11-15 11:46:41.125266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.125322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.125335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.125342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.125348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.125363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 
00:28:40.315 [2024-11-15 11:46:41.135229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.135284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.135297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.135304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.135309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.135325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 00:28:40.315 [2024-11-15 11:46:41.145251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.145312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.145326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.145332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.145338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.145353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 00:28:40.315 [2024-11-15 11:46:41.155276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.315 [2024-11-15 11:46:41.155339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.315 [2024-11-15 11:46:41.155352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.315 [2024-11-15 11:46:41.155362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.315 [2024-11-15 11:46:41.155368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.315 [2024-11-15 11:46:41.155383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.315 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-15 11:46:41.165316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.165374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.165387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.165394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.165400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.165415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.175390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.175446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.175463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.175470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.175476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.175490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.185373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.185439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.185454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.185464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.185471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.185486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-15 11:46:41.195529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.195590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.195603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.195610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.195617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.195631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.205407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.205486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.205500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.205508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.205515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.205531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.215386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.215444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.215457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.215469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.215474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.215489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-15 11:46:41.225508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.225570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.225583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.225591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.225597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.225611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.235567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.235629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.235644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.235651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.235657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.235672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.245603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.245674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.245689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.245696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.245702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.245716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-15 11:46:41.255509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.255565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.255579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.255585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.255591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.255606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.265692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.265748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.265761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.265768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.265774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.265789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-15 11:46:41.275629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.275687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.575 [2024-11-15 11:46:41.275700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.575 [2024-11-15 11:46:41.275707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.575 [2024-11-15 11:46:41.275713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.575 [2024-11-15 11:46:41.275728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.575 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-15 11:46:41.285699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.575 [2024-11-15 11:46:41.285757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.285771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.285780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.285786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.285800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.295681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.295739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.295754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.295762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.295768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.295783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.305804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.305874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.305889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.305895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.305902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.305916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-15 11:46:41.315797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.315862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.315877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.315883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.315889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.315904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.325824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.325887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.325901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.325908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.325914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.325931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.335799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.335855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.335868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.335874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.335880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.335895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-15 11:46:41.345878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.345948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.345962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.345968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.345974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.345988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.355910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.355972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.355984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.355991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.355997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.356012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.365980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.366072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.366086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.366093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.366099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.366113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-15 11:46:41.375914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.375971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.375984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.375991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.375996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.376011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.386006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.386070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.386083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.386090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.386096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.386111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.396026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.396086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.396099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.396107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.396113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.396128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-15 11:46:41.406056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.406132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.406146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.576 [2024-11-15 11:46:41.406153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.576 [2024-11-15 11:46:41.406159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.576 [2024-11-15 11:46:41.406174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-15 11:46:41.416017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.576 [2024-11-15 11:46:41.416098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.576 [2024-11-15 11:46:41.416115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.577 [2024-11-15 11:46:41.416122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.577 [2024-11-15 11:46:41.416128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.577 [2024-11-15 11:46:41.416142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.426123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.426227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.426242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.426249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.426255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.426269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 
00:28:40.837 [2024-11-15 11:46:41.436129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.436207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.436221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.436227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.436234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.436248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.446164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.446231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.446254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.446260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.446267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.446284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.456132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.456194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.456207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.456214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.456227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.456242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 
00:28:40.837 [2024-11-15 11:46:41.466283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.466349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.466363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.466369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.466375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.466390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.476235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.476300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.476315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.476322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.476327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.476342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.486278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.486338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.486351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.486358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.486365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.486380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 
00:28:40.837 [2024-11-15 11:46:41.496267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.496323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.496337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.496343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.496349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.496363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.506402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.506491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.506507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.506513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.506519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.506535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.516353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.516416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.516429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.516436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.516443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.516460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 
00:28:40.837 [2024-11-15 11:46:41.526395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.526505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.526520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.526526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.526532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.526547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.536376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.536436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.536450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.536457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.837 [2024-11-15 11:46:41.536467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.837 [2024-11-15 11:46:41.536482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.837 qpair failed and we were unable to recover it. 00:28:40.837 [2024-11-15 11:46:41.546449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.837 [2024-11-15 11:46:41.546519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.837 [2024-11-15 11:46:41.546536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.837 [2024-11-15 11:46:41.546542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.546548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.546562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 
00:28:40.838 [2024-11-15 11:46:41.556522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.556603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.556617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.556624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.556630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.556645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.566498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.566564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.566578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.566585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.566591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.566605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.576466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.576522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.576535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.576541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.576547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.576562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 
00:28:40.838 [2024-11-15 11:46:41.586618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.586704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.586718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.586724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.586733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.586747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.596643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.596731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.596746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.596752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.596759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.596773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.606620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.606680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.606694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.606701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.606707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.606722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 
00:28:40.838 [2024-11-15 11:46:41.616602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.616660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.616673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.616679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.616685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.616699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.626699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.626761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.626774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.626781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.626787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f34000b90 00:28:40.838 [2024-11-15 11:46:41.626803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.636706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.636806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.636825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.636833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.636839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f30000b90 00:28:40.838 [2024-11-15 11:46:41.636858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.838 qpair failed and we were unable to recover it. 
00:28:40.838 [2024-11-15 11:46:41.646727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.646791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.646805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.646812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.646818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f30000b90 00:28:40.838 [2024-11-15 11:46:41.646833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.656705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.656780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.656802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.656811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.656817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f3c000b90 00:28:40.838 [2024-11-15 11:46:41.656834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.838 qpair failed and we were unable to recover it. 00:28:40.838 [2024-11-15 11:46:41.666790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.838 [2024-11-15 11:46:41.666852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.838 [2024-11-15 11:46:41.666866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.838 [2024-11-15 11:46:41.666873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.838 [2024-11-15 11:46:41.666879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4f3c000b90 00:28:40.839 [2024-11-15 11:46:41.666894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.839 qpair failed and we were unable to recover it. 00:28:40.839 [2024-11-15 11:46:41.667050] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:40.839 A controller has encountered a failure and is being reset. 
00:28:40.839 [2024-11-15 11:46:41.676843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.839 [2024-11-15 11:46:41.676966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.839 [2024-11-15 11:46:41.677023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.839 [2024-11-15 11:46:41.677049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.839 [2024-11-15 11:46:41.677072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1922550 00:28:40.839 [2024-11-15 11:46:41.677124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.839 qpair failed and we were unable to recover it. 00:28:40.839 [2024-11-15 11:46:41.686796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.839 [2024-11-15 11:46:41.686882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.839 [2024-11-15 11:46:41.686911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.839 [2024-11-15 11:46:41.686926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.839 [2024-11-15 11:46:41.686940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1922550 00:28:40.839 [2024-11-15 11:46:41.686972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.839 qpair failed and we were unable to recover it. 00:28:41.097 Controller properly reset. 00:28:41.097 Initializing NVMe Controllers 00:28:41.097 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:41.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:41.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:41.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:41.098 Initialization complete. Launching workers. 
00:28:41.098 Starting thread on core 1 00:28:41.098 Starting thread on core 2 00:28:41.098 Starting thread on core 3 00:28:41.098 Starting thread on core 0 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:41.098 00:28:41.098 real 0m10.977s 00:28:41.098 user 0m18.931s 00:28:41.098 sys 0m4.503s 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.098 ************************************ 00:28:41.098 END TEST nvmf_target_disconnect_tc2 00:28:41.098 ************************************ 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.098 rmmod nvme_tcp 00:28:41.098 rmmod nvme_fabrics 00:28:41.098 rmmod nvme_keyring 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1408311 ']' 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1408311 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1408311 ']' 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1408311 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1408311 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1408311' 00:28:41.098 killing process with pid 1408311 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 1408311 00:28:41.098 11:46:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1408311 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.357 11:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.893 11:46:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.893 00:28:43.893 real 0m19.520s 00:28:43.893 user 0m47.265s 00:28:43.893 sys 0m9.258s 00:28:43.893 11:46:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:43.893 11:46:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:43.893 ************************************ 00:28:43.893 END TEST nvmf_target_disconnect 00:28:43.893 ************************************ 00:28:43.893 11:46:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:43.893 00:28:43.893 real 5m58.966s 00:28:43.893 user 11m23.973s 00:28:43.893 sys 1m53.005s 00:28:43.893 11:46:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:43.893 11:46:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.893 ************************************ 00:28:43.893 END TEST nvmf_host 00:28:43.893 ************************************ 00:28:43.893 11:46:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:43.893 11:46:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:43.893 11:46:44 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:43.893 11:46:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:43.893 11:46:44 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:43.893 11:46:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.893 ************************************ 00:28:43.893 START TEST nvmf_target_core_interrupt_mode 00:28:43.893 ************************************ 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:43.893 * Looking for test storage... 00:28:43.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:43.893 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:43.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.894 --rc genhtml_branch_coverage=1 00:28:43.894 --rc genhtml_function_coverage=1 00:28:43.894 --rc genhtml_legend=1 00:28:43.894 --rc geninfo_all_blocks=1 00:28:43.894 --rc geninfo_unexecuted_blocks=1 00:28:43.894 00:28:43.894 ' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:43.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.894 --rc genhtml_branch_coverage=1 00:28:43.894 --rc genhtml_function_coverage=1 00:28:43.894 --rc genhtml_legend=1 00:28:43.894 --rc geninfo_all_blocks=1 00:28:43.894 --rc geninfo_unexecuted_blocks=1 00:28:43.894 00:28:43.894 ' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:43.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.894 --rc genhtml_branch_coverage=1 00:28:43.894 --rc genhtml_function_coverage=1 00:28:43.894 --rc genhtml_legend=1 00:28:43.894 --rc geninfo_all_blocks=1 00:28:43.894 --rc geninfo_unexecuted_blocks=1 00:28:43.894 00:28:43.894 ' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:43.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.894 --rc genhtml_branch_coverage=1 00:28:43.894 --rc genhtml_function_coverage=1 00:28:43.894 --rc genhtml_legend=1 00:28:43.894 --rc geninfo_all_blocks=1 00:28:43.894 --rc geninfo_unexecuted_blocks=1 00:28:43.894 00:28:43.894 ' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.894 ************************************ 00:28:43.894 START TEST nvmf_abort 00:28:43.894 ************************************ 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:43.894 * Looking for test storage... 00:28:43.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.894 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:43.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.895 --rc genhtml_branch_coverage=1 00:28:43.895 --rc genhtml_function_coverage=1 00:28:43.895 --rc genhtml_legend=1 00:28:43.895 --rc geninfo_all_blocks=1 00:28:43.895 --rc geninfo_unexecuted_blocks=1 00:28:43.895 00:28:43.895 ' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:43.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.895 --rc genhtml_branch_coverage=1 00:28:43.895 --rc genhtml_function_coverage=1 00:28:43.895 --rc genhtml_legend=1 00:28:43.895 --rc geninfo_all_blocks=1 00:28:43.895 --rc geninfo_unexecuted_blocks=1 00:28:43.895 00:28:43.895 ' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:43.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.895 --rc genhtml_branch_coverage=1 00:28:43.895 --rc genhtml_function_coverage=1 00:28:43.895 --rc genhtml_legend=1 00:28:43.895 --rc geninfo_all_blocks=1 00:28:43.895 --rc geninfo_unexecuted_blocks=1 00:28:43.895 00:28:43.895 ' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:43.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.895 --rc genhtml_branch_coverage=1 00:28:43.895 --rc genhtml_function_coverage=1 00:28:43.895 --rc genhtml_legend=1 00:28:43.895 --rc geninfo_all_blocks=1 00:28:43.895 --rc geninfo_unexecuted_blocks=1 00:28:43.895 00:28:43.895 ' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.895 11:46:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.895 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.896 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.896 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.896 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.896 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.896 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.896 11:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.466 11:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:50.466 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
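The device scan above matches the node's Intel E810 ports (vendor 0x8086, device 0x159b) and, in the trace that follows, resolves each PCI function to its kernel net device through the /sys/bus/pci/devices/$pci/net/ glob. The same lookup can be done by hand (a sketch; the PCI address is just the first port found above):

  pci=0000:af:00.0
  ls /sys/bus/pci/devices/"$pci"/net/    # prints the bound net device; resolved below to cvl_0_0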
00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:50.466 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:50.466 Found net devices under 0000:af:00.0: cvl_0_0 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:50.466 Found net devices under 0000:af:00.1: cvl_0_1 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.466 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:28:50.467 00:28:50.467 --- 10.0.0.2 ping statistics --- 00:28:50.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.467 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:28:50.467 00:28:50.467 --- 10.0.0.1 ping statistics --- 00:28:50.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.467 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1413177 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1413177 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1413177 ']' 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 [2024-11-15 11:46:50.654524] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:50.467 [2024-11-15 11:46:50.655880] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:28:50.467 [2024-11-15 11:46:50.655924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.467 [2024-11-15 11:46:50.728009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.467 [2024-11-15 11:46:50.768384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.467 [2024-11-15 11:46:50.768414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.467 [2024-11-15 11:46:50.768421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.467 [2024-11-15 11:46:50.768426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.467 [2024-11-15 11:46:50.768431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.467 [2024-11-15 11:46:50.769745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.467 [2024-11-15 11:46:50.769823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.467 [2024-11-15 11:46:50.769825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.467 [2024-11-15 11:46:50.836630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:50.467 [2024-11-15 11:46:50.836657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:50.467 [2024-11-15 11:46:50.836752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
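Stripped of the xtrace noise, the network plumbing and target launch performed by nvmftestinit/nvmfappstart above amounts to the following sequence (condensed from the commands in the trace, not a substitute for the common.sh helpers):

  ip netns add cvl_0_0_ns_spdk                          # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # sanity-check reachability in both directions, as above
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE

With --interrupt-mode and -m 0xE the target brings up reactors on cores 1-3 and switches its poll groups to interrupt-driven operation, which is what the reactor.c and thread.c notices around this point report.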
00:28:50.467 [2024-11-15 11:46:50.836848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 [2024-11-15 11:46:50.926477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 Malloc0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 Delay0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:50.467 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.468 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.468 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.468 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.468 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.468 [2024-11-15 11:46:51.002409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.468 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.468 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:50.468 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.468 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:50.468 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.468 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:50.468 [2024-11-15 11:46:51.133315] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:52.373 Initializing NVMe Controllers 00:28:52.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:52.373 controller IO queue size 128 less than required 00:28:52.373 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:52.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:52.373 Initialization complete. Launching workers. 
00:28:52.373 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 24196 00:28:52.373 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24257, failed to submit 66 00:28:52.373 success 24196, unsuccessful 61, failed 0 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.373 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.373 rmmod nvme_tcp 00:28:52.373 rmmod nvme_fabrics 00:28:52.632 rmmod nvme_keyring 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1413177 ']' 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1413177 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1413177 ']' 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1413177 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1413177 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1413177' 00:28:52.632 killing process with pid 1413177 
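The abort run above (24257 aborts submitted, 24196 successful) was configured entirely through the rpc_cmd calls visible in the trace; issued by hand against the target's /var/tmp/spdk.sock they correspond to the following, with the delay bdev keeping I/Os in flight long enough for the aborts to catch them (a sketch assuming the default RPC socket and the repository's scripts/rpc.py, which rpc_cmd resolves to in this harness):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport, 8 KiB in-capsule data, 256 qpairs
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0                   # 64 MiB, 4 KiB-block RAM bdev
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive aborts at queue depth 128 for 1 second from core 0, as target/abort.sh@30 does above
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128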
00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1413177 00:28:52.632 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1413177 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.891 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.795 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:54.795 00:28:54.795 real 0m11.069s 00:28:54.795 user 0m10.320s 00:28:54.795 sys 0m5.588s 00:28:54.795 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:54.795 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:54.795 ************************************ 00:28:54.795 END TEST nvmf_abort 00:28:54.795 ************************************ 00:28:54.795 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:54.795 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:54.795 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:54.796 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:54.796 ************************************ 00:28:54.796 START TEST nvmf_ns_hotplug_stress 00:28:54.796 ************************************ 00:28:54.796 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:55.055 * Looking for test storage... 
00:28:55.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.055 --rc genhtml_branch_coverage=1 00:28:55.055 --rc genhtml_function_coverage=1 00:28:55.055 --rc genhtml_legend=1 00:28:55.055 --rc geninfo_all_blocks=1 00:28:55.055 --rc geninfo_unexecuted_blocks=1 00:28:55.055 00:28:55.055 ' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.055 --rc genhtml_branch_coverage=1 00:28:55.055 --rc genhtml_function_coverage=1 00:28:55.055 --rc genhtml_legend=1 00:28:55.055 --rc geninfo_all_blocks=1 00:28:55.055 --rc geninfo_unexecuted_blocks=1 00:28:55.055 00:28:55.055 ' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.055 --rc genhtml_branch_coverage=1 00:28:55.055 --rc genhtml_function_coverage=1 00:28:55.055 --rc genhtml_legend=1 00:28:55.055 --rc geninfo_all_blocks=1 00:28:55.055 --rc geninfo_unexecuted_blocks=1 00:28:55.055 00:28:55.055 ' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:55.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.055 --rc genhtml_branch_coverage=1 00:28:55.055 --rc genhtml_function_coverage=1 
00:28:55.055 --rc genhtml_legend=1 00:28:55.055 --rc geninfo_all_blocks=1 00:28:55.055 --rc geninfo_unexecuted_blocks=1 00:28:55.055 00:28:55.055 ' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.055 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.056 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.327 11:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.327 11:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:00.327 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:00.327 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.327 
11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:00.327 Found net devices under 0000:af:00.0: cvl_0_0 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:00.327 Found net devices under 0000:af:00.1: cvl_0_1 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.327 11:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.327 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:29:00.586 00:29:00.586 --- 10.0.0.2 ping statistics --- 00:29:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.586 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:00.586 00:29:00.586 --- 10.0.0.1 ping statistics --- 00:29:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.586 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.586 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1417186 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1417186 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1417186 ']' 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:00.845 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
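Condensed from the nvmf_tcp_init records above: the test uses two ports of the same physical NIC (0000:af:00.0 and 0000:af:00.1, seen as cvl_0_0 and cvl_0_1), moving the target-side port into a network namespace so initiator (10.0.0.1) and target (10.0.0.2) have separate stacks. A minimal sketch of that sequence, using the interface names specific to this host:

ip netns add cvl_0_0_ns_spdk                                          # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP port 4420 through
ping -c 1 10.0.0.2                                                    # connectivity check in each direction
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1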
00:29:00.846 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:00.846 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.846 [2024-11-15 11:47:01.524045] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:00.846 [2024-11-15 11:47:01.525398] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:29:00.846 [2024-11-15 11:47:01.525446] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.846 [2024-11-15 11:47:01.597232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.846 [2024-11-15 11:47:01.634168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.846 [2024-11-15 11:47:01.634204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.846 [2024-11-15 11:47:01.634210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.846 [2024-11-15 11:47:01.634216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.846 [2024-11-15 11:47:01.634220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.846 [2024-11-15 11:47:01.635510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.846 [2024-11-15 11:47:01.635551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.846 [2024-11-15 11:47:01.635553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.105 [2024-11-15 11:47:01.701243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:01.105 [2024-11-15 11:47:01.701246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:01.105 [2024-11-15 11:47:01.701334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:01.105 [2024-11-15 11:47:01.701468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
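The startup notices above come from the target being launched inside that namespace with interrupt mode enabled; the invocation recorded in the trace is:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE     # core mask 0xE (3 cores), tracepoint group mask 0xFFFF

waitforlisten then waits for the process to start listening on /var/tmp/spdk.sock before the script issues any RPCs.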
00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:01.105 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:01.363 [2024-11-15 11:47:02.036018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.364 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:01.622 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.880 [2024-11-15 11:47:02.580373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.880 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:02.139 11:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:02.398 Malloc0 00:29:02.398 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:02.656 Delay0 00:29:02.656 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.914 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:03.173 NULL1 00:29:03.173 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
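Pulled together from the rpc.py calls traced above, the target-side configuration for the hotplug stress run is roughly the following (rpc.py stands for the full scripts/rpc.py path shown in the log):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0                 # small malloc bdev, 512-byte blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py bdev_null_create NULL1 1000 512                      # NULL1 starts at size 1000 (the null_size above)
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 layers artificial latency on top of Malloc0, and NULL1 is the namespace the loop below keeps resizing.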
00:29:03.431 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1417730 00:29:03.431 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:03.431 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:03.431 11:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.812 Read completed with error (sct=0, sc=11) 00:29:04.812 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.072 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:05.072 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:05.330 true 00:29:05.330 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:05.330 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.897 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.465 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:06.465 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:06.465 true 00:29:06.465 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:06.465 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:06.723 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.981 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:06.981 11:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:07.239 true 00:29:07.239 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:07.239 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.806 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.806 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:07.806 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:08.065 true 00:29:08.065 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:08.065 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.006 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.317 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:09.317 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:09.607 true 00:29:09.607 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:09.607 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.888 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
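The records that repeat from here on are iterations of the hotplug loop: spdk_nvme_perf is started in the background for 30 seconds of random reads while the script keeps detaching and re-attaching namespace 1 and growing NULL1. Schematically (a condensed reading of the trace, not the script verbatim; null_size starts at 1000):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
while kill -0 $PERF_PID 2>/dev/null; do                               # keep going while perf is alive
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove namespace 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add it back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 $null_size                          # grow the other namespace
done

The recurring 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines are presumably perf reads caught while namespace 1 is detached, which is the condition the stress test is exercising.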
00:29:10.148 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:10.148 11:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:10.406 true 00:29:10.406 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:10.406 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.666 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.925 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:10.925 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:11.184 true 00:29:11.184 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:11.184 11:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.122 11:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.393 11:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:12.393 11:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:12.652 true 00:29:12.911 11:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:12.911 11:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.169 11:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.428 11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:13.428 11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:13.687 true 00:29:13.687 11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:13.687 
11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.946 11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.203 11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:14.203 11:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:14.461 true 00:29:14.461 11:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:14.461 11:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.399 11:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:15.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:15.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:15.658 11:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:15.658 11:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:15.917 true 00:29:15.917 11:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:15.917 11:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.176 11:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.435 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:16.435 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:16.693 true 00:29:16.693 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:16.693 11:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.630 11:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.630 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:17.630 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:17.889 true 00:29:17.889 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:17.889 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.147 11:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.406 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:18.406 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:18.664 true 00:29:18.664 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:18.664 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.601 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.860 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:19.860 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:20.119 true 00:29:20.119 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:20.119 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.378 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.637 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:20.637 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:20.895 true 00:29:20.895 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:20.895 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.154 11:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.412 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:21.412 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:21.671 true 00:29:21.671 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:21.930 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.867 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.126 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:23.126 11:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:23.385 true 00:29:23.385 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:23.385 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.643 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.902 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:23.902 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:24.160 true 00:29:24.160 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:24.160 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:29:24.418 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.677 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:24.677 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:24.936 true 00:29:24.936 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:24.936 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.873 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.132 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:26.132 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:26.391 true 00:29:26.391 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:26.391 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.650 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.909 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:26.909 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:27.168 true 00:29:27.168 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:27.168 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.105 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.105 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.105 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:28.105 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:28.364 true 00:29:28.364 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:28.364 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.623 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.881 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:28.881 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:29.140 true 00:29:29.140 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:29.140 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.077 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.336 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:30.336 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:30.594 true 00:29:30.594 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:30.594 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.853 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.421 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:31.421 11:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:31.421 true 00:29:31.421 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:31.421 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.680 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.939 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:31.939 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:32.197 true 00:29:32.455 11:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:32.455 11:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.391 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.650 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:33.650 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:33.909 Initializing NVMe Controllers 00:29:33.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.909 Controller IO queue size 128, less than required. 00:29:33.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:33.909 Controller IO queue size 128, less than required. 00:29:33.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:33.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:33.909 Initialization complete. Launching workers. 
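The single-namespace loop traced above (ns_hotplug_stress.sh@44-@50) reduces to a short bash sketch. It is reconstructed from the xtrace output alone; the $pid value (1417730 in this run), the $rpc shorthand for the traced rpc.py path, and the earlier creation of the Delay0 and NULL1 bdevs are assumptions, not lines copied from the script source.

#!/usr/bin/env bash
# Sketch of the hotplug/resize loop seen in the trace (sh@44-@50).
# Assumes $pid is the stress app traced above and that the Delay0 and
# NULL1 bdevs already exist on subsystem nqn.2016-06.io.spdk:cnode1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
pid=$1            # pid of the traced stress app (1417730 in this run)
null_size=1000

while kill -0 "$pid"; do                         # sh@44: run until the app exits
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # sh@45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # sh@46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                 # sh@49: 1006, 1007, ... in the log
    "$rpc" bdev_null_resize NULL1 "$null_size"   # sh@50: resize NULL1 while I/O runs
done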
00:29:33.909 ======================================================== 00:29:33.909 Latency(us) 00:29:33.909 Device Information : IOPS MiB/s Average min max 00:29:33.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 774.48 0.38 74948.47 2842.54 1052949.94 00:29:33.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12774.14 6.24 10019.21 1568.99 610177.13 00:29:33.909 ======================================================== 00:29:33.909 Total : 13548.62 6.62 13730.76 1568.99 1052949.94 00:29:33.909 00:29:33.909 true 00:29:33.909 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1417730 00:29:33.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1417730) - No such process 00:29:33.909 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1417730 00:29:33.909 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.167 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.426 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:34.426 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:34.426 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:34.426 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.426 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:34.685 null0 00:29:34.685 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.685 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.685 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:34.944 null1 00:29:34.944 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.944 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.944 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:35.202 null2 00:29:35.202 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:35.202 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:35.202 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:35.460 null3 00:29:35.460 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:35.460 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:35.460 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:35.718 null4 00:29:35.718 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:35.718 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:35.718 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:35.976 null5 00:29:35.976 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:35.976 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:35.976 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:36.234 null6 00:29:36.234 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:36.234 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:36.234 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:36.493 null7 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
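Once the single-namespace loop exits, the trace switches to the multi-worker phase: eight null bdevs (null0 through null7) are created to back eight concurrent add/remove workers. A minimal sketch of that setup loop (sh@58-@60), again reconstructed only from the traced commands, with $rpc as an assumed shorthand for the traced rpc.py path:

# Setup for the parallel phase (sh@58-@60), as traced above.
# Creates eight null bdevs with the size/block-size arguments seen in
# the trace (100, 4096); each call prints the new bdev name, e.g. null0.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
    "$rpc" bdev_null_create "null$i" 100 4096
done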
00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:36.493 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
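Each background worker runs the add_remove helper whose body shows up interleaved in the trace (sh@14-@18): repeated rounds of attaching its null bdev under a fixed namespace ID and detaching it again, while the launcher (sh@62-@66) forks one worker per namespace and waits for all of them. A hedged reconstruction from the trace, reusing the $rpc, $nqn, nthreads and pids assumptions from the sketches above:

# Worker and launcher reconstructed from the interleaved trace
# (sh@14-@18 and sh@62-@66). Assumes $rpc, $nqn, nthreads and pids are
# set as in the earlier sketches.
add_remove() {
    local nsid=$1 bdev=$2                                       # sh@14
    for (( i = 0; i < 10; i++ )); do                            # sh@16
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
    done
}

for (( i = 0; i < nthreads; i++ )); do
    add_remove $((i + 1)) "null$i" &     # sh@63: nsid 1..8 backed by null0..null7
    pids+=($!)                           # sh@64: remember the worker pid
done
wait "${pids[@]}"                        # sh@66: "wait 1423552 1423553 ..." in the log

Because every worker targets a distinct namespace ID on the same subsystem, the eight loops can run concurrently without colliding, which is what produces the interleaved add_ns/remove_ns RPCs in the entries that follow.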
00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1423552 1423553 1423555 1423557 1423559 1423561 1423563 1423564 00:29:36.494 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.753 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.753 11:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.011 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.011 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.011 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.012 11:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.012 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.270 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.270 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.270 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.271 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.271 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.271 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.271 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.271 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.529 11:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.529 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.788 11:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.788 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:38.047 11:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.047 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:38.305 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.305 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.305 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.306 11:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:38.306 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.306 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.306 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:38.306 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.306 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.306 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.564 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.823 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:39.082 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.341 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.341 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:39.341 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.342 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.342 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:39.342 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.342 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.342 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:39.342 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:29:39.600 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.601 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:39.859 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.859 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.859 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:39.860 
11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:39.860 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.119 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.119 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.119 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.119 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.120 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:40.379 11:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:40.379 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.638 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:40.897 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:41.156 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:41.156 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.156 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:41.416 11:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.416 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:41.676 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
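The interleaved @16-@18 records above all come from the same three lines of ns_hotplug_stress.sh: a loop bounded at 10 iterations (the "(( ++i ))" / "(( i < 10 ))" records), an nvmf_subsystem_add_ns call, and a matching nvmf_subsystem_remove_ns call, driven against namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 backed by null bdevs null0-null7. A minimal sketch reconstructed from the trace, assuming one background worker per namespace (the shuffled namespace order suggests this); the helper name add_remove is illustrative, not copied from the script:

#!/usr/bin/env bash
# Reconstruction of the hotplug stress loop traced above; names are illustrative.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path taken from the trace
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                   # one worker per namespace (assumed structure)
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do               # the @16 "(( ++i ))" / "(( i < 10 ))" records
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}

for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &           # null0..null7 back namespaces 1..8
done
wait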
00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.935 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.194 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.194 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.194 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:42.194 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:42.194 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.194 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.195 rmmod nvme_tcp 00:29:42.195 rmmod nvme_fabrics 00:29:42.195 rmmod nvme_keyring 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1417186 ']' 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1417186 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1417186 ']' 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1417186 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1417186 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1417186' 00:29:42.195 killing process with pid 1417186 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1417186 00:29:42.195 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1417186 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.454 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.362 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.362 00:29:44.362 real 0m49.520s 00:29:44.362 user 3m17.969s 00:29:44.362 sys 0m20.274s 00:29:44.362 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:44.362 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:44.362 ************************************ 00:29:44.362 END TEST nvmf_ns_hotplug_stress 00:29:44.362 ************************************ 00:29:44.362 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:44.362 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:44.362 11:47:45 
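The teardown traced from nvmftestfini through remove_spdk_ns above boils down to: sync, unload the kernel NVMe/TCP modules with a bounded retry loop, kill and wait for the target process (pid 1417186, reactor_1, in this run), restore every iptables rule that is not an SPDK_NVMF rule, and finally drop the test network namespace and flush the initiator-side address. A condensed sketch of that order, assuming the tcp transport path shown in the log; the break inside the modprobe loop and the nvmfpid variable name are assumptions:

# Teardown order as traced above; nvmfpid is illustrative, 1417186 is the pid from this log.
nvmfpid=1417186
sync
for i in {1..20}; do                                        # nvmf/common.sh@125 retries the unload
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
kill "$nvmfpid" && wait "$nvmfpid" 2> /dev/null || true     # killprocess from autotest_common.sh
iptables-save | grep -v SPDK_NVMF | iptables-restore        # iptr: keep only non-SPDK_NVMF rules
# _remove_spdk_ns (nvmf/common.sh) runs next; its body is not visible in this trace.
ip -4 addr flush cvl_0_1                                    # clear the initiator-side test address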
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:44.362 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:44.621 ************************************ 00:29:44.621 START TEST nvmf_delete_subsystem 00:29:44.621 ************************************ 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:44.621 * Looking for test storage... 00:29:44.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.621 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.622 --rc genhtml_branch_coverage=1 00:29:44.622 --rc genhtml_function_coverage=1 00:29:44.622 --rc genhtml_legend=1 00:29:44.622 --rc geninfo_all_blocks=1 00:29:44.622 --rc geninfo_unexecuted_blocks=1 00:29:44.622 00:29:44.622 ' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.622 --rc genhtml_branch_coverage=1 00:29:44.622 --rc genhtml_function_coverage=1 00:29:44.622 --rc genhtml_legend=1 00:29:44.622 --rc geninfo_all_blocks=1 00:29:44.622 --rc geninfo_unexecuted_blocks=1 00:29:44.622 00:29:44.622 ' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.622 --rc genhtml_branch_coverage=1 00:29:44.622 --rc genhtml_function_coverage=1 00:29:44.622 --rc genhtml_legend=1 00:29:44.622 --rc geninfo_all_blocks=1 00:29:44.622 --rc geninfo_unexecuted_blocks=1 00:29:44.622 00:29:44.622 ' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:44.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.622 --rc genhtml_branch_coverage=1 00:29:44.622 --rc genhtml_function_coverage=1 00:29:44.622 --rc 
genhtml_legend=1 00:29:44.622 --rc geninfo_all_blocks=1 00:29:44.622 --rc geninfo_unexecuted_blocks=1 00:29:44.622 00:29:44.622 ' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.622 11:47:45 
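A few records further up, scripts/common.sh runs "lt 1.15 2" to decide whether the installed lcov predates version 2: each version string is split on the characters ".-:" into an array, the fields are compared numerically left to right, and the pending "<" comparison is resolved at the first field that differs (hence the "return 0" in the trace). A self-contained sketch written to mirror those records rather than copied from scripts/common.sh:

# Version comparison as traced above (lt / cmp_versions); helper bodies are condensed.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local IFS=.-:                      # split version strings on '.', '-' and ':'
    local op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v d1 d2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}         # missing fields compare as 0
        ((d1 > d2)) && { [[ $op == ">" ]]; return; }
        ((d1 < d2)) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all fields equal
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"     # matches the 'return 0' outcome in the trace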
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.622 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.623 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.198 11:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.198 11:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:51.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:51.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.198 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.199 11:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:51.199 Found net devices under 0000:af:00.0: cvl_0_0 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:51.199 Found net devices under 0000:af:00.1: cvl_0_1 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.199 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:29:51.199 00:29:51.199 --- 10.0.0.2 ping statistics --- 00:29:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.199 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:29:51.199 00:29:51.199 --- 10.0.0.1 ping statistics --- 00:29:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.199 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1428231 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1428231 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1428231 ']' 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
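
The nvmftestinit trace above amounts to the following network plumbing: one of the two E810 ports (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its peer port (cvl_0_1) stays in the default namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is checked with a ping in each direction before the target is started. A minimal standalone sketch of that sequence, using the interface names, addresses and namespace name reported in the log (the real helper in nvmf/common.sh adds more bookkeeping and error handling):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                     # default namespace -> target address
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # namespace -> initiator address
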
00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:51.199 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.199 [2024-11-15 11:47:51.202612] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:51.199 [2024-11-15 11:47:51.203929] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:29:51.199 [2024-11-15 11:47:51.203971] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.199 [2024-11-15 11:47:51.304067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:51.199 [2024-11-15 11:47:51.353368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.199 [2024-11-15 11:47:51.353408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.199 [2024-11-15 11:47:51.353418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.200 [2024-11-15 11:47:51.353427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.200 [2024-11-15 11:47:51.353435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.200 [2024-11-15 11:47:51.354917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.200 [2024-11-15 11:47:51.354928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.200 [2024-11-15 11:47:51.433360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:51.200 [2024-11-15 11:47:51.433372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:51.200 [2024-11-15 11:47:51.433697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
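
The target itself is then launched inside that namespace with a two-core mask (0x3), the full tracepoint mask (0xFFFF) and --interrupt-mode, which is why the startup notices above report interrupt mode, two available cores and two reactors. Roughly, assuming the stock scripts/rpc.py location under the same SPDK checkout (the trace uses the framework's waitforlisten helper; a hypothetical polling loop stands in for it here):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # stand-in for waitforlisten: poll until the RPC socket answers
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
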
00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 [2024-11-15 11:47:52.134140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 [2024-11-15 11:47:52.158560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 NULL1 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.459 11:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 Delay0 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1428404 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:51.459 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:51.459 [2024-11-15 11:47:52.250574] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
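
In RPC terms, the first pass above provisions the target like this: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces), listen on 10.0.0.2:4420, back it with a null bdev wrapped in a delay bdev that adds 1,000,000 us (~1 s) of artificial latency so I/O stays in flight, and attach that bdev as a namespace. spdk_nvme_perf is then aimed at the listener and, per delete_subsystem.sh@30-32, the subsystem is deleted two seconds into the run. A sketch of the same calls via scripts/rpc.py (the trace goes through the framework's rpc_cmd wrapper; the rpc.py path is an assumption based on the standard SPDK layout):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"       # Unix-domain RPC socket, reachable from the default namespace
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                   # 1000 MiB backing device, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s average/p99 read and write latency
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &          # 5 s of 70/30 random read/write, qd 128
    perf_pid=$!
    sleep 2
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem while I/O is still queued

The queued commands then complete with errors (sct=0, sc=8) and perf exits with "errors occurred", which the script treats as the expected outcome via the NOT wait check later in the trace.
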
00:29:53.362 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.362 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.362 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.620 Read completed with error (sct=0, sc=8) 00:29:53.620 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 [2024-11-15 11:47:54.425290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1898000c40 is same with the state(6) to be set 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 
00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 [2024-11-15 11:47:54.425702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f189800d680 is same with the state(6) to be set 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write 
completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 Write completed with error (sct=0, sc=8) 00:29:53.621 [2024-11-15 11:47:54.425878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f189800d020 is same with the state(6) to be set 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 starting I/O failed: -6 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.621 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed 
with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 Write completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:53.622 Read completed with error (sct=0, sc=8) 00:29:53.622 starting I/O failed: -6 00:29:54.556 [2024-11-15 11:47:55.386817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f95e0 is same with the state(6) to be set 00:29:54.815 Write completed with error (sct=0, sc=8) 00:29:54.815 Write completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Write completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Write completed with error (sct=0, sc=8) 00:29:54.815 Write completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Write completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.815 Read completed with error (sct=0, sc=8) 00:29:54.816 [2024-11-15 11:47:55.428668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f189800d350 is same with the state(6) to be set 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 
00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 [2024-11-15 11:47:55.429098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f80e0 is same with the state(6) to be set 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 [2024-11-15 11:47:55.429375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7f00 is same with the state(6) to be set 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write 
completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Read completed with error (sct=0, sc=8) 00:29:54.816 Write completed with error (sct=0, sc=8) 00:29:54.816 [2024-11-15 11:47:55.429870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f84a0 is same with the state(6) to be set 00:29:54.816 Initializing NVMe Controllers 00:29:54.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.816 Controller IO queue size 128, less than required. 00:29:54.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:54.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:54.816 Initialization complete. Launching workers. 
00:29:54.816 ========================================================
00:29:54.816 Latency(us)
00:29:54.816 Device Information : IOPS MiB/s Average min max
00:29:54.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.17 0.09 1062091.10 352.16 2001157.00
00:29:54.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.32 0.08 879934.50 419.28 1012374.21
00:29:54.816 ========================================================
00:29:54.816 Total : 328.50 0.16 976516.02 352.16 2001157.00
00:29:54.816
00:29:54.816 [2024-11-15 11:47:55.430393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f95e0 (9): Bad file descriptor
00:29:54.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:54.816 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.816 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:54.816 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1428404 00:29:54.816 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1428404 00:29:55.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1428404) - No such process 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1428404 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1428404 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1428404 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.384 [2024-11-15 11:47:55.962553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1429042 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:55.384 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:55.384 [2024-11-15 11:47:56.030253] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
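
For the second pass the subsystem is recreated with the same Delay0 namespace and spdk_nvme_perf is left to run for its full 3 seconds (-t 3); the script then simply polls the perf pid until it exits, the delay / kill -0 / sleep 0.5 pattern traced here and in the iterations that follow. A condensed sketch of that wait loop (perf_pid is the pid captured when perf was started, 1429042 in this run; the bound and message are illustrative, not the script verbatim):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # still running?
        if (( delay++ > 20 )); then             # ~10 s ceiling at 0.5 s per poll
            echo "perf did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done
    wait "$perf_pid"                            # collect its exit status (delete_subsystem.sh@67)
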
00:29:55.643 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:55.643 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:55.643 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:56.210 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:56.210 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:56.210 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:56.777 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:56.777 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:56.777 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:57.344 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:57.345 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:57.345 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:57.911 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:57.911 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:57.911 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:58.170 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:58.170 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:58.170 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:58.430 Initializing NVMe Controllers 00:29:58.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.430 Controller IO queue size 128, less than required. 00:29:58.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:58.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:58.430 Initialization complete. Launching workers. 
00:29:58.430 ========================================================
00:29:58.430 Latency(us)
00:29:58.431 Device Information : IOPS MiB/s Average min max
00:29:58.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003947.97 1000164.69 1012239.53
00:29:58.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003049.07 1000190.31 1011893.69
00:29:58.431 ========================================================
00:29:58.431 Total : 256.00 0.12 1003498.52 1000164.69 1012239.53
00:29:58.431
00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1429042 00:29:58.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1429042) - No such process 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1429042 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.689 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.689 rmmod nvme_tcp 00:29:58.689 rmmod nvme_fabrics 00:29:58.948 rmmod nvme_keyring 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1428231 ']' 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1428231 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1428231 ']' 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1428231 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1428231 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1428231' 00:29:58.948 killing process with pid 1428231 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1428231 00:29:58.948 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1428231 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.207 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.110 00:30:01.110 real 0m16.675s 00:30:01.110 user 0m26.471s 00:30:01.110 sys 0m5.906s 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:01.110 ************************************ 00:30:01.110 END TEST nvmf_delete_subsystem 00:30:01.110 ************************************ 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:30:01.110 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.370 ************************************ 00:30:01.370 START TEST nvmf_host_management 00:30:01.370 ************************************ 00:30:01.370 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:01.370 * Looking for test storage... 00:30:01.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:01.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.370 --rc genhtml_branch_coverage=1 00:30:01.370 --rc genhtml_function_coverage=1 00:30:01.370 --rc genhtml_legend=1 00:30:01.370 --rc geninfo_all_blocks=1 00:30:01.370 --rc geninfo_unexecuted_blocks=1 00:30:01.370 00:30:01.370 ' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:01.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.370 --rc genhtml_branch_coverage=1 00:30:01.370 --rc genhtml_function_coverage=1 00:30:01.370 --rc genhtml_legend=1 00:30:01.370 --rc geninfo_all_blocks=1 00:30:01.370 --rc geninfo_unexecuted_blocks=1 00:30:01.370 00:30:01.370 ' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:01.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.370 --rc genhtml_branch_coverage=1 00:30:01.370 --rc genhtml_function_coverage=1 00:30:01.370 --rc genhtml_legend=1 00:30:01.370 --rc geninfo_all_blocks=1 00:30:01.370 --rc geninfo_unexecuted_blocks=1 00:30:01.370 00:30:01.370 ' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:01.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.370 --rc genhtml_branch_coverage=1 00:30:01.370 --rc genhtml_function_coverage=1 00:30:01.370 --rc genhtml_legend=1 
00:30:01.370 --rc geninfo_all_blocks=1 00:30:01.370 --rc geninfo_unexecuted_blocks=1 00:30:01.370 00:30:01.370 ' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.370 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.371 11:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.371 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.643 11:48:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:06.643 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:06.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:06.644 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
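The loop traced above maps each matching PCI function to whatever kernel net device sysfs has bound under it and keeps only interfaces that are up. A minimal bash sketch of that lookup, using the two PCI addresses reported in this run; the operstate read is an assumption about how "up" is determined, the overall flow follows the trace:

# Sketch only: resolve the net device behind each NVMe-capable PCI function,
# the way gather_supported_nvmf_pci_devs does above via sysfs.
for pci in 0000:af:00.0 0000:af:00.1; do            # addresses reported in this run
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -d "$netdir" ] || continue                 # skip if no net device is bound
        dev=${netdir##*/}                            # e.g. cvl_0_0 / cvl_0_1
        state=$(cat "$netdir/operstate" 2>/dev/null)  # assumed source of the up/down check
        echo "Found net device under $pci: $dev (state: $state)"
    done
done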
00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:06.644 Found net devices under 0000:af:00.0: cvl_0_0 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:06.644 Found net devices under 0000:af:00.1: cvl_0_1 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:30:06.644 00:30:06.644 --- 10.0.0.2 ping statistics --- 00:30:06.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.644 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:30:06.644 00:30:06.644 --- 10.0.0.1 ping statistics --- 00:30:06.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.644 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.644 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1433414 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1433414 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1433414 ']' 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:06.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:06.645 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:06.905 [2024-11-15 11:48:07.535704] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:06.905 [2024-11-15 11:48:07.537016] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:30:06.905 [2024-11-15 11:48:07.537058] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.905 [2024-11-15 11:48:07.608132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.905 [2024-11-15 11:48:07.649087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.905 [2024-11-15 11:48:07.649121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.905 [2024-11-15 11:48:07.649128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.905 [2024-11-15 11:48:07.649134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.905 [2024-11-15 11:48:07.649138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.905 [2024-11-15 11:48:07.650581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.905 [2024-11-15 11:48:07.650682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.905 [2024-11-15 11:48:07.650765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.905 [2024-11-15 11:48:07.650766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.905 [2024-11-15 11:48:07.716689] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:06.905 [2024-11-15 11:48:07.716792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:06.905 [2024-11-15 11:48:07.716931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:06.905 [2024-11-15 11:48:07.717229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:06.905 [2024-11-15 11:48:07.717391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
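The namespace topology and target launch traced above condense to the sketch below. Interface names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, and the shortened nvmf_tgt path are specific to this run, not fixed by the scripts; the flush steps are omitted for brevity:

# Back-to-back TCP test topology built from one physical port pair.
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # reach the target address from the host
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back from the namespace

# The target is then started inside the namespace with the reactors in
# interrupt mode (-m 0x1E selects cores 1-4), as nvmfappstart does above:
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &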
00:30:06.905 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 [2024-11-15 11:48:07.799433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 Malloc0 00:30:07.165 [2024-11-15 11:48:07.875330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1433456 00:30:07.165 11:48:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1433456 /var/tmp/bdevperf.sock 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1433456 ']' 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:07.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.165 { 00:30:07.165 "params": { 00:30:07.165 "name": "Nvme$subsystem", 00:30:07.165 "trtype": "$TEST_TRANSPORT", 00:30:07.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.165 "adrfam": "ipv4", 00:30:07.165 "trsvcid": "$NVMF_PORT", 00:30:07.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.165 "hdgst": ${hdgst:-false}, 00:30:07.165 "ddgst": ${ddgst:-false} 00:30:07.165 }, 00:30:07.165 "method": "bdev_nvme_attach_controller" 00:30:07.165 } 00:30:07.165 EOF 00:30:07.165 )") 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
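The heredoc assembled above is what gen_nvmf_target_json renders into the JSON printed just below, and bdevperf reads that JSON through process substitution, which is why the traced command line shows --json /dev/fd/63. An equivalent standalone invocation, with the repository path shortened, would look roughly like:

# Run the bdevperf verify workload against the generated Nvme0 target config.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10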
00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:07.165 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:07.165 "params": { 00:30:07.165 "name": "Nvme0", 00:30:07.165 "trtype": "tcp", 00:30:07.165 "traddr": "10.0.0.2", 00:30:07.165 "adrfam": "ipv4", 00:30:07.165 "trsvcid": "4420", 00:30:07.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.165 "hdgst": false, 00:30:07.165 "ddgst": false 00:30:07.165 }, 00:30:07.165 "method": "bdev_nvme_attach_controller" 00:30:07.165 }' 00:30:07.165 [2024-11-15 11:48:07.978162] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:30:07.165 [2024-11-15 11:48:07.978224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433456 ] 00:30:07.426 [2024-11-15 11:48:08.074809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.426 [2024-11-15 11:48:08.123174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.687 Running I/O for 10 seconds... 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.687 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:07.949 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.949 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=79 00:30:07.949 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 79 -ge 100 ']' 00:30:07.949 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.218 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:08.218 [2024-11-15 11:48:08.880814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.218 [2024-11-15 11:48:08.880860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.218 [2024-11-15 11:48:08.880874] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.218 [2024-11-15 11:48:08.880885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.218 [2024-11-15 11:48:08.880895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.218 [2024-11-15 11:48:08.880905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.218 [2024-11-15 11:48:08.880916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.218 [2024-11-15 11:48:08.880926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.218 [2024-11-15 11:48:08.880937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fda40 is same with the state(6) to be set 00:30:08.218 [2024-11-15 11:48:08.882940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.882975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.882982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.882988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.882995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883173] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the 
state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724dd0 is same with the state(6) to be set 00:30:08.219 [2024-11-15 11:48:08.883551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.219 [2024-11-15 11:48:08.883581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.219 [2024-11-15 11:48:08.883601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.219 [2024-11-15 11:48:08.883612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.219 [2024-11-15 11:48:08.883625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.219 [2024-11-15 11:48:08.883635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.219 [2024-11-15 11:48:08.883647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.219 [2024-11-15 11:48:08.883658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.219 [2024-11-15 11:48:08.883670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.219 [2024-11-15 11:48:08.883679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.219 [2024-11-15 11:48:08.883697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.219 [2024-11-15 11:48:08.883706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.219 [2024-11-15 11:48:08.883719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883772] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.883981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.883990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.220 [2024-11-15 11:48:08.884144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.220 [2024-11-15 11:48:08.884498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.220 [2024-11-15 11:48:08.884507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:08.221 [2024-11-15 11:48:08.884543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.221 [2024-11-15 11:48:08.884848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 
[2024-11-15 11:48:08.884858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.221 [2024-11-15 11:48:08.884985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.884995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916990 is same with the state(6) to be set 00:30:08.221 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:08.221 [2024-11-15 11:48:08.886444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:08.221 task offset: 81920 on job bdev=Nvme0n1 fails 00:30:08.221 00:30:08.221 Latency(us) 00:30:08.221 [2024-11-15T10:48:09.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.221 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:08.221 Job: Nvme0n1 ended in about 0.43 seconds with error 00:30:08.221 Verification LBA range: start 0x0 length 0x400 00:30:08.221 Nvme0n1 : 0.43 1475.06 92.19 147.51 0.00 38016.40 3872.58 34078.72 00:30:08.221 [2024-11-15T10:48:09.074Z] =================================================================================================================== 00:30:08.221 [2024-11-15T10:48:09.074Z] Total : 1475.06 92.19 147.51 0.00 38016.40 3872.58 34078.72 00:30:08.221 [2024-11-15 11:48:08.889609] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:30:08.221 [2024-11-15 11:48:08.889636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fda40 (9): Bad file descriptor 00:30:08.221 [2024-11-15 11:48:08.890842] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:08.221 [2024-11-15 11:48:08.890929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:08.221 [2024-11-15 11:48:08.890957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.221 [2024-11-15 11:48:08.890977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:08.221 [2024-11-15 11:48:08.890988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:08.221 [2024-11-15 11:48:08.890997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.221 [2024-11-15 11:48:08.891006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16fda40 00:30:08.221 [2024-11-15 11:48:08.891031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fda40 (9): Bad file descriptor 00:30:08.221 [2024-11-15 11:48:08.891048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:08.221 [2024-11-15 11:48:08.891057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:08.221 [2024-11-15 11:48:08.891069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:08.221 [2024-11-15 11:48:08.891080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:08.221 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.221 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1433456 00:30:09.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1433456) - No such process 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.158 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.158 { 00:30:09.158 "params": { 00:30:09.158 "name": "Nvme$subsystem", 00:30:09.158 "trtype": "$TEST_TRANSPORT", 00:30:09.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.158 "adrfam": "ipv4", 00:30:09.158 "trsvcid": "$NVMF_PORT", 00:30:09.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.159 "hdgst": ${hdgst:-false}, 00:30:09.159 "ddgst": ${ddgst:-false} 00:30:09.159 }, 00:30:09.159 "method": "bdev_nvme_attach_controller" 00:30:09.159 } 00:30:09.159 EOF 00:30:09.159 )") 00:30:09.159 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:09.159 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:09.159 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:09.159 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:09.159 "params": { 00:30:09.159 "name": "Nvme0", 00:30:09.159 "trtype": "tcp", 00:30:09.159 "traddr": "10.0.0.2", 00:30:09.159 "adrfam": "ipv4", 00:30:09.159 "trsvcid": "4420", 00:30:09.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.159 "hdgst": false, 00:30:09.159 "ddgst": false 00:30:09.159 }, 00:30:09.159 "method": "bdev_nvme_attach_controller" 00:30:09.159 }' 00:30:09.159 [2024-11-15 11:48:09.955116] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:30:09.159 [2024-11-15 11:48:09.955182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434083 ] 00:30:09.417 [2024-11-15 11:48:10.053675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.417 [2024-11-15 11:48:10.109471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.676 Running I/O for 1 seconds... 00:30:10.612 1600.00 IOPS, 100.00 MiB/s 00:30:10.612 Latency(us) 00:30:10.612 [2024-11-15T10:48:11.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.612 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.612 Verification LBA range: start 0x0 length 0x400 00:30:10.612 Nvme0n1 : 1.02 1625.61 101.60 0.00 0.00 38531.19 6345.08 34078.72 00:30:10.612 [2024-11-15T10:48:11.465Z] =================================================================================================================== 00:30:10.612 [2024-11-15T10:48:11.465Z] Total : 1625.61 101.60 0.00 0.00 38531.19 6345.08 34078.72 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.871 rmmod nvme_tcp 00:30:10.871 rmmod nvme_fabrics 00:30:10.871 rmmod nvme_keyring 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1433414 ']' 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1433414 00:30:10.871 11:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1433414 ']' 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1433414 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:10.871 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1433414 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1433414' 00:30:11.131 killing process with pid 1433414 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1433414 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1433414 00:30:11.131 [2024-11-15 11:48:11.893635] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.131 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.665 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:13.666 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:13.666 00:30:13.666 real 0m12.032s 00:30:13.666 user 
0m19.373s 00:30:13.666 sys 0m5.878s 00:30:13.666 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:13.666 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:13.666 ************************************ 00:30:13.666 END TEST nvmf_host_management 00:30:13.666 ************************************ 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:13.666 ************************************ 00:30:13.666 START TEST nvmf_lvol 00:30:13.666 ************************************ 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:13.666 * Looking for test storage... 00:30:13.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:13.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.666 --rc genhtml_branch_coverage=1 00:30:13.666 --rc genhtml_function_coverage=1 00:30:13.666 --rc genhtml_legend=1 00:30:13.666 --rc geninfo_all_blocks=1 00:30:13.666 --rc geninfo_unexecuted_blocks=1 00:30:13.666 00:30:13.666 ' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:13.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.666 --rc genhtml_branch_coverage=1 00:30:13.666 --rc genhtml_function_coverage=1 00:30:13.666 --rc genhtml_legend=1 00:30:13.666 --rc geninfo_all_blocks=1 00:30:13.666 --rc geninfo_unexecuted_blocks=1 00:30:13.666 00:30:13.666 ' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:13.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.666 --rc genhtml_branch_coverage=1 00:30:13.666 --rc genhtml_function_coverage=1 00:30:13.666 --rc genhtml_legend=1 00:30:13.666 --rc geninfo_all_blocks=1 00:30:13.666 --rc geninfo_unexecuted_blocks=1 00:30:13.666 00:30:13.666 ' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:13.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.666 --rc genhtml_branch_coverage=1 00:30:13.666 --rc genhtml_function_coverage=1 
00:30:13.666 --rc genhtml_legend=1 00:30:13.666 --rc geninfo_all_blocks=1 00:30:13.666 --rc geninfo_unexecuted_blocks=1 00:30:13.666 00:30:13.666 ' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.666 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.667 11:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.667 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.940 11:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:18.940 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:18.940 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:18.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:18.941 Found net devices under 0000:af:00.0: cvl_0_0 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:18.941 Found net devices under 0000:af:00.1: cvl_0_1 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.941 
11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.941 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.199 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.199 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:19.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:30:19.200 00:30:19.200 --- 10.0.0.2 ping statistics --- 00:30:19.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.200 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:19.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:30:19.200 00:30:19.200 --- 10.0.0.1 ping statistics --- 00:30:19.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.200 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1438110 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1438110 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1438110 ']' 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:19.200 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:19.200 [2024-11-15 11:48:19.922673] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:19.200 [2024-11-15 11:48:19.924013] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:30:19.200 [2024-11-15 11:48:19.924057] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.200 [2024-11-15 11:48:20.028050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:19.459 [2024-11-15 11:48:20.086165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.459 [2024-11-15 11:48:20.086206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.459 [2024-11-15 11:48:20.086216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.459 [2024-11-15 11:48:20.086224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.459 [2024-11-15 11:48:20.086232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:19.459 [2024-11-15 11:48:20.087831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.459 [2024-11-15 11:48:20.087934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.459 [2024-11-15 11:48:20.087935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.459 [2024-11-15 11:48:20.163626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:19.459 [2024-11-15 11:48:20.163784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:19.459 [2024-11-15 11:48:20.163789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:19.459 [2024-11-15 11:48:20.164094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
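The nvmftestinit trace above (nvmf/common.sh) reduces to roughly the following setup: the target-side port is isolated in a network namespace, both ends are addressed, the NVMe/TCP port is opened, and the target is launched in interrupt mode inside the namespace. Interface names, addresses, and the nvmf_tgt invocation are taken from the traced commands; the condensed form below is a sketch, not the exact nvmf/common.sh logic.

  # target-side port goes into its own namespace; the initiator side stays in the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (namespace side)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the NVMe-oF target inside the namespace: interrupt mode, cores 0-2 (-m 0x7)
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7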
00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.459 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:19.718 [2024-11-15 11:48:20.480691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.718 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.977 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:19.977 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:20.545 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:20.545 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:20.803 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:21.062 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=387f3648-a2f2-4823-915d-8ce48e03475b 00:30:21.062 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 387f3648-a2f2-4823-915d-8ce48e03475b lvol 20 00:30:21.321 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e7b8f011-a38d-430e-a532-55c84d96e61a 00:30:21.321 11:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:21.580 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7b8f011-a38d-430e-a532-55c84d96e61a 00:30:21.838 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:22.097 [2024-11-15 11:48:22.784669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:22.097 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.355 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1438667 00:30:22.355 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:22.355 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:23.293 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e7b8f011-a38d-430e-a532-55c84d96e61a MY_SNAPSHOT 00:30:23.552 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4dd4818b-cec7-4580-886b-b168c2e72722 00:30:23.552 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e7b8f011-a38d-430e-a532-55c84d96e61a 30 00:30:24.118 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4dd4818b-cec7-4580-886b-b168c2e72722 MY_CLONE 00:30:24.376 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=14b5824a-e374-4034-ba42-6d5cfad8d408 00:30:24.376 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 14b5824a-e374-4034-ba42-6d5cfad8d408 00:30:24.944 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1438667 00:30:33.265 Initializing NVMe Controllers 00:30:33.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:33.265 Controller IO queue size 128, less than required. 00:30:33.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:33.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:33.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:33.265 Initialization complete. Launching workers. 
00:30:33.265 ======================================================== 00:30:33.265 Latency(us) 00:30:33.265 Device Information : IOPS MiB/s Average min max 00:30:33.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13576.50 53.03 9433.28 1499.86 51506.28 00:30:33.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8577.20 33.50 14932.59 5963.62 90556.14 00:30:33.265 ======================================================== 00:30:33.265 Total : 22153.70 86.54 11562.43 1499.86 90556.14 00:30:33.265 00:30:33.265 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:33.265 11:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7b8f011-a38d-430e-a532-55c84d96e61a 00:30:33.265 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 387f3648-a2f2-4823-915d-8ce48e03475b 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.524 rmmod nvme_tcp 00:30:33.524 rmmod nvme_fabrics 00:30:33.524 rmmod nvme_keyring 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1438110 ']' 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1438110 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1438110 ']' 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1438110 00:30:33.524 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1438110 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1438110' 00:30:33.783 killing process with pid 1438110 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1438110 00:30:33.783 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1438110 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.041 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.042 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.042 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.042 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.042 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.944 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.944 00:30:35.944 real 0m22.659s 00:30:35.944 user 0m58.107s 00:30:35.944 sys 0m9.677s 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:35.945 ************************************ 00:30:35.945 END TEST nvmf_lvol 00:30:35.945 ************************************ 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:35.945 ************************************ 00:30:35.945 START TEST nvmf_lvs_grow 00:30:35.945 
************************************ 00:30:35.945 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:36.205 * Looking for test storage... 00:30:36.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.205 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.205 --rc genhtml_branch_coverage=1 00:30:36.205 --rc genhtml_function_coverage=1 00:30:36.205 --rc genhtml_legend=1 00:30:36.205 --rc geninfo_all_blocks=1 00:30:36.206 --rc geninfo_unexecuted_blocks=1 00:30:36.206 00:30:36.206 ' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:36.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.206 --rc genhtml_branch_coverage=1 00:30:36.206 --rc genhtml_function_coverage=1 00:30:36.206 --rc genhtml_legend=1 00:30:36.206 --rc geninfo_all_blocks=1 00:30:36.206 --rc geninfo_unexecuted_blocks=1 00:30:36.206 00:30:36.206 ' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:36.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.206 --rc genhtml_branch_coverage=1 00:30:36.206 --rc genhtml_function_coverage=1 00:30:36.206 --rc genhtml_legend=1 00:30:36.206 --rc geninfo_all_blocks=1 00:30:36.206 --rc geninfo_unexecuted_blocks=1 00:30:36.206 00:30:36.206 ' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:36.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.206 --rc genhtml_branch_coverage=1 00:30:36.206 --rc genhtml_function_coverage=1 00:30:36.206 --rc genhtml_legend=1 00:30:36.206 --rc geninfo_all_blocks=1 00:30:36.206 --rc geninfo_unexecuted_blocks=1 00:30:36.206 00:30:36.206 ' 00:30:36.206 11:48:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.206 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.477 11:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.477 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:41.478 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:41.478 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:41.478 Found net devices under 0000:af:00.0: cvl_0_0 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:41.478 Found net devices under 0000:af:00.1: cvl_0_1 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.478 11:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.478 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:30:41.478 00:30:41.478 --- 10.0.0.2 ping statistics --- 00:30:41.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.478 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:41.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:41.478 00:30:41.478 --- 10.0.0.1 ping statistics --- 00:30:41.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.478 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.478 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1444180 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1444180 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1444180 ']' 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:41.479 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:41.479 [2024-11-15 11:48:42.228115] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:41.479 [2024-11-15 11:48:42.229457] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:30:41.479 [2024-11-15 11:48:42.229509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.738 [2024-11-15 11:48:42.330498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.738 [2024-11-15 11:48:42.378222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.738 [2024-11-15 11:48:42.378263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.738 [2024-11-15 11:48:42.378273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.738 [2024-11-15 11:48:42.378282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.738 [2024-11-15 11:48:42.378289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.738 [2024-11-15 11:48:42.378990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.738 [2024-11-15 11:48:42.453356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:41.738 [2024-11-15 11:48:42.453660] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.738 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:41.997 [2024-11-15 11:48:42.767682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:41.997 ************************************ 00:30:41.997 START TEST lvs_grow_clean 00:30:41.997 ************************************ 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:41.997 11:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:42.564 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:42.564 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:42.824 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:42.824 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:42.824 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:43.083 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:43.083 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:43.083 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 75d663f0-3f10-4085-a3da-a2bf96003d54 lvol 150 00:30:43.342 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2b62afb4-14d1-4182-82ca-5be53f213b11 00:30:43.342 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:43.342 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:43.601 [2024-11-15 11:48:44.227437] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:43.601 [2024-11-15 11:48:44.227591] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:43.601 true 00:30:43.601 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:43.601 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:43.860 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:43.860 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:44.119 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2b62afb4-14d1-4182-82ca-5be53f213b11 00:30:44.119 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:44.378 [2024-11-15 11:48:45.207927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.637 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1444793 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1444793 /var/tmp/bdevperf.sock 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1444793 ']' 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:44.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.897 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:44.897 [2024-11-15 11:48:45.558076] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:30:44.897 [2024-11-15 11:48:45.558138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444793 ] 00:30:44.897 [2024-11-15 11:48:45.624189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.897 [2024-11-15 11:48:45.664514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.156 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:45.156 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:30:45.157 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:45.416 Nvme0n1 00:30:45.416 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:45.676 [ 00:30:45.676 { 00:30:45.676 "name": "Nvme0n1", 00:30:45.676 "aliases": [ 00:30:45.676 "2b62afb4-14d1-4182-82ca-5be53f213b11" 00:30:45.676 ], 00:30:45.676 "product_name": "NVMe disk", 00:30:45.676 "block_size": 4096, 00:30:45.676 "num_blocks": 38912, 00:30:45.676 "uuid": "2b62afb4-14d1-4182-82ca-5be53f213b11", 00:30:45.676 "numa_id": 1, 00:30:45.676 "assigned_rate_limits": { 00:30:45.676 "rw_ios_per_sec": 0, 00:30:45.676 "rw_mbytes_per_sec": 0, 00:30:45.676 "r_mbytes_per_sec": 0, 00:30:45.676 "w_mbytes_per_sec": 0 00:30:45.676 }, 00:30:45.676 "claimed": false, 00:30:45.676 "zoned": false, 00:30:45.676 "supported_io_types": { 00:30:45.676 "read": true, 00:30:45.676 "write": true, 00:30:45.676 "unmap": true, 00:30:45.676 "flush": true, 00:30:45.676 "reset": true, 00:30:45.676 "nvme_admin": true, 00:30:45.676 "nvme_io": true, 00:30:45.676 "nvme_io_md": false, 00:30:45.676 "write_zeroes": true, 00:30:45.676 "zcopy": false, 00:30:45.676 "get_zone_info": false, 00:30:45.676 "zone_management": false, 00:30:45.676 "zone_append": false, 00:30:45.676 "compare": true, 00:30:45.676 "compare_and_write": true, 00:30:45.676 "abort": true, 00:30:45.676 "seek_hole": false, 00:30:45.676 "seek_data": false, 00:30:45.676 "copy": true, 
00:30:45.676 "nvme_iov_md": false 00:30:45.676 }, 00:30:45.676 "memory_domains": [ 00:30:45.676 { 00:30:45.676 "dma_device_id": "system", 00:30:45.676 "dma_device_type": 1 00:30:45.676 } 00:30:45.676 ], 00:30:45.676 "driver_specific": { 00:30:45.676 "nvme": [ 00:30:45.676 { 00:30:45.676 "trid": { 00:30:45.676 "trtype": "TCP", 00:30:45.676 "adrfam": "IPv4", 00:30:45.676 "traddr": "10.0.0.2", 00:30:45.676 "trsvcid": "4420", 00:30:45.676 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:45.676 }, 00:30:45.676 "ctrlr_data": { 00:30:45.676 "cntlid": 1, 00:30:45.676 "vendor_id": "0x8086", 00:30:45.676 "model_number": "SPDK bdev Controller", 00:30:45.676 "serial_number": "SPDK0", 00:30:45.676 "firmware_revision": "25.01", 00:30:45.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.676 "oacs": { 00:30:45.676 "security": 0, 00:30:45.676 "format": 0, 00:30:45.676 "firmware": 0, 00:30:45.676 "ns_manage": 0 00:30:45.676 }, 00:30:45.676 "multi_ctrlr": true, 00:30:45.676 "ana_reporting": false 00:30:45.676 }, 00:30:45.676 "vs": { 00:30:45.676 "nvme_version": "1.3" 00:30:45.676 }, 00:30:45.676 "ns_data": { 00:30:45.676 "id": 1, 00:30:45.676 "can_share": true 00:30:45.676 } 00:30:45.676 } 00:30:45.676 ], 00:30:45.676 "mp_policy": "active_passive" 00:30:45.676 } 00:30:45.676 } 00:30:45.676 ] 00:30:45.676 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1444841 00:30:45.676 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:45.676 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:45.676 Running I/O for 10 seconds... 
00:30:47.055 Latency(us) 00:30:47.055 [2024-11-15T10:48:47.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.055 Nvme0n1 : 1.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:30:47.055 [2024-11-15T10:48:47.908Z] =================================================================================================================== 00:30:47.055 [2024-11-15T10:48:47.908Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:30:47.055 00:30:47.623 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:47.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.882 Nvme0n1 : 2.00 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:30:47.882 [2024-11-15T10:48:48.735Z] =================================================================================================================== 00:30:47.882 [2024-11-15T10:48:48.735Z] Total : 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:30:47.882 00:30:47.882 true 00:30:47.882 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:47.882 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:48.141 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:48.141 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:48.141 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1444841 00:30:48.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.709 Nvme0n1 : 3.00 14843.67 57.98 0.00 0.00 0.00 0.00 0.00 00:30:48.709 [2024-11-15T10:48:49.562Z] =================================================================================================================== 00:30:48.709 [2024-11-15T10:48:49.562Z] Total : 14843.67 57.98 0.00 0.00 0.00 0.00 0.00 00:30:48.709 00:30:50.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.089 Nvme0n1 : 4.00 14875.00 58.11 0.00 0.00 0.00 0.00 0.00 00:30:50.089 [2024-11-15T10:48:50.942Z] =================================================================================================================== 00:30:50.089 [2024-11-15T10:48:50.942Z] Total : 14875.00 58.11 0.00 0.00 0.00 0.00 0.00 00:30:50.089 00:30:51.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.028 Nvme0n1 : 5.00 14909.80 58.24 0.00 0.00 0.00 0.00 0.00 00:30:51.028 [2024-11-15T10:48:51.881Z] =================================================================================================================== 00:30:51.028 [2024-11-15T10:48:51.881Z] Total : 14909.80 58.24 0.00 0.00 0.00 0.00 0.00 00:30:51.028 00:30:51.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.962 Nvme0n1 : 6.00 14943.67 58.37 0.00 0.00 0.00 0.00 0.00 00:30:51.962 [2024-11-15T10:48:52.815Z] 
=================================================================================================================== 00:30:51.962 [2024-11-15T10:48:52.815Z] Total : 14943.67 58.37 0.00 0.00 0.00 0.00 0.00 00:30:51.962 00:30:52.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.898 Nvme0n1 : 7.00 14958.86 58.43 0.00 0.00 0.00 0.00 0.00 00:30:52.898 [2024-11-15T10:48:53.751Z] =================================================================================================================== 00:30:52.898 [2024-11-15T10:48:53.751Z] Total : 14958.86 58.43 0.00 0.00 0.00 0.00 0.00 00:30:52.898 00:30:53.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.834 Nvme0n1 : 8.00 14970.12 58.48 0.00 0.00 0.00 0.00 0.00 00:30:53.834 [2024-11-15T10:48:54.687Z] =================================================================================================================== 00:30:53.834 [2024-11-15T10:48:54.687Z] Total : 14970.12 58.48 0.00 0.00 0.00 0.00 0.00 00:30:53.834 00:30:54.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:54.768 Nvme0n1 : 9.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:30:54.768 [2024-11-15T10:48:55.621Z] =================================================================================================================== 00:30:54.768 [2024-11-15T10:48:55.621Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:30:54.768 00:30:55.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.704 Nvme0n1 : 10.00 14998.70 58.59 0.00 0.00 0.00 0.00 0.00 00:30:55.704 [2024-11-15T10:48:56.557Z] =================================================================================================================== 00:30:55.704 [2024-11-15T10:48:56.557Z] Total : 14998.70 58.59 0.00 0.00 0.00 0.00 0.00 00:30:55.704 00:30:55.704 00:30:55.704 Latency(us) 00:30:55.704 [2024-11-15T10:48:56.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.704 Nvme0n1 : 10.00 15006.83 58.62 0.00 0.00 8526.31 7149.38 27048.49 00:30:55.704 [2024-11-15T10:48:56.557Z] =================================================================================================================== 00:30:55.704 [2024-11-15T10:48:56.557Z] Total : 15006.83 58.62 0.00 0.00 8526.31 7149.38 27048.49 00:30:55.704 { 00:30:55.704 "results": [ 00:30:55.704 { 00:30:55.704 "job": "Nvme0n1", 00:30:55.704 "core_mask": "0x2", 00:30:55.704 "workload": "randwrite", 00:30:55.704 "status": "finished", 00:30:55.704 "queue_depth": 128, 00:30:55.704 "io_size": 4096, 00:30:55.704 "runtime": 10.003114, 00:30:55.704 "iops": 15006.826874111403, 00:30:55.704 "mibps": 58.620417476997666, 00:30:55.704 "io_failed": 0, 00:30:55.704 "io_timeout": 0, 00:30:55.704 "avg_latency_us": 8526.310436665222, 00:30:55.704 "min_latency_us": 7149.381818181818, 00:30:55.704 "max_latency_us": 27048.494545454545 00:30:55.704 } 00:30:55.704 ], 00:30:55.704 "core_count": 1 00:30:55.704 } 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1444793 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1444793 ']' 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1444793 
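The JSON block just above is bdevperf's summary for the clean run: about 15k 4 KiB random writes per second over a 10.003 s runtime with an average latency of roughly 8.5 ms, and the MiB/s column is simply IOPS x 4096 / 2^20 (15006.83 x 4096 / 1048576 is about 58.62, matching the reported mibps). A small sketch for pulling those fields back out of a saved copy of that JSON; the filename is assumed, the field names are the ones shown in the log.

    # assumes the summary object was captured to bdevperf_results.json
    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, \(.avg_latency_us) us avg"' \
        bdevperf_results.json
    # cross-check throughput from IOPS and the 4096-byte I/O size
    awk 'BEGIN { printf "%.2f MiB/s\n", 15006.83 * 4096 / (1024 * 1024) }'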
00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1444793 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1444793' 00:30:55.963 killing process with pid 1444793 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1444793 00:30:55.963 Received shutdown signal, test time was about 10.000000 seconds 00:30:55.963 00:30:55.963 Latency(us) 00:30:55.963 [2024-11-15T10:48:56.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.963 [2024-11-15T10:48:56.816Z] =================================================================================================================== 00:30:55.963 [2024-11-15T10:48:56.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1444793 00:30:55.963 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:56.222 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:56.790 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:56.790 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:56.790 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:56.790 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:56.790 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:57.049 [2024-11-15 11:48:57.883522] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:57.307 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 
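By this point the test has captured the post-grow accounting (free_clusters=61, i.e. the 99 data clusters of the grown lvstore minus the 38 that the 150 MiB lvol occupies at a 4 MiB cluster size) and has deleted the backing aio_bdev, which hot-removes the lvstore as the vbdev_lvs_hotremove_cb notice shows. The NOT wrapper around bdev_lvol_get_lvstores is therefore a negative check: the call is expected to fail with the -19 "No such device" JSON-RPC error that follows, and the wrapper inverts the exit status so the failure counts as a pass. A rough equivalent without the framework, assuming the same rpc.py path and the UUID from the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # the lvstore must be gone once its backing aio_bdev has been deleted
    if $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54; then
        echo "lvstore unexpectedly still registered" >&2
        exit 1
    fi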
00:30:57.307 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:57.308 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:57.567 request: 00:30:57.567 { 00:30:57.567 "uuid": "75d663f0-3f10-4085-a3da-a2bf96003d54", 00:30:57.567 "method": "bdev_lvol_get_lvstores", 00:30:57.567 "req_id": 1 00:30:57.567 } 00:30:57.567 Got JSON-RPC error response 00:30:57.567 response: 00:30:57.567 { 00:30:57.567 "code": -19, 00:30:57.567 "message": "No such device" 00:30:57.567 } 00:30:57.567 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:30:57.567 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:57.567 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:57.567 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:57.567 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:57.826 aio_bdev 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
2b62afb4-14d1-4182-82ca-5be53f213b11 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=2b62afb4-14d1-4182-82ca-5be53f213b11 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:57.826 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:58.084 11:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2b62afb4-14d1-4182-82ca-5be53f213b11 -t 2000 00:30:58.343 [ 00:30:58.343 { 00:30:58.343 "name": "2b62afb4-14d1-4182-82ca-5be53f213b11", 00:30:58.343 "aliases": [ 00:30:58.343 "lvs/lvol" 00:30:58.343 ], 00:30:58.343 "product_name": "Logical Volume", 00:30:58.343 "block_size": 4096, 00:30:58.343 "num_blocks": 38912, 00:30:58.343 "uuid": "2b62afb4-14d1-4182-82ca-5be53f213b11", 00:30:58.343 "assigned_rate_limits": { 00:30:58.343 "rw_ios_per_sec": 0, 00:30:58.343 "rw_mbytes_per_sec": 0, 00:30:58.343 "r_mbytes_per_sec": 0, 00:30:58.343 "w_mbytes_per_sec": 0 00:30:58.343 }, 00:30:58.343 "claimed": false, 00:30:58.343 "zoned": false, 00:30:58.343 "supported_io_types": { 00:30:58.343 "read": true, 00:30:58.343 "write": true, 00:30:58.343 "unmap": true, 00:30:58.343 "flush": false, 00:30:58.343 "reset": true, 00:30:58.343 "nvme_admin": false, 00:30:58.343 "nvme_io": false, 00:30:58.343 "nvme_io_md": false, 00:30:58.343 "write_zeroes": true, 00:30:58.343 "zcopy": false, 00:30:58.343 "get_zone_info": false, 00:30:58.343 "zone_management": false, 00:30:58.343 "zone_append": false, 00:30:58.343 "compare": false, 00:30:58.343 "compare_and_write": false, 00:30:58.343 "abort": false, 00:30:58.343 "seek_hole": true, 00:30:58.343 "seek_data": true, 00:30:58.343 "copy": false, 00:30:58.343 "nvme_iov_md": false 00:30:58.343 }, 00:30:58.343 "driver_specific": { 00:30:58.343 "lvol": { 00:30:58.343 "lvol_store_uuid": "75d663f0-3f10-4085-a3da-a2bf96003d54", 00:30:58.343 "base_bdev": "aio_bdev", 00:30:58.343 "thin_provision": false, 00:30:58.343 "num_allocated_clusters": 38, 00:30:58.343 "snapshot": false, 00:30:58.343 "clone": false, 00:30:58.343 "esnap_clone": false 00:30:58.343 } 00:30:58.343 } 00:30:58.343 } 00:30:58.343 ] 00:30:58.343 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:30:58.343 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:58.343 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:58.602 11:48:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:58.602 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:58.602 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:58.860 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:58.860 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2b62afb4-14d1-4182-82ca-5be53f213b11 00:30:59.119 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 75d663f0-3f10-4085-a3da-a2bf96003d54 00:30:59.378 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:59.636 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:59.636 00:30:59.636 real 0m17.655s 00:30:59.636 user 0m17.420s 00:30:59.636 sys 0m1.654s 00:30:59.636 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:59.636 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:59.636 ************************************ 00:30:59.636 END TEST lvs_grow_clean 00:30:59.636 ************************************ 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:59.895 ************************************ 00:30:59.895 START TEST lvs_grow_dirty 00:30:59.895 ************************************ 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:59.895 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:00.154 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:00.154 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:00.413 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:00.413 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:00.413 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:00.673 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:00.673 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:00.673 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 lvol 150 00:31:00.935 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:00.935 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.935 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:01.193 [2024-11-15 11:49:01.927437] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:01.193 [2024-11-15 11:49:01.927584] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:01.193 true 00:31:01.193 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:01.194 11:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:01.453 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:01.454 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:01.713 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:01.974 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.233 [2024-11-15 11:49:03.023906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.233 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1447891 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1447891 /var/tmp/bdevperf.sock 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1447891 ']' 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
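The dirty variant repeats the same scaffolding: a fresh 200 MiB aio_bdev, a 4 MiB-cluster lvstore with 49 data clusters, a 150 MiB lvol exported through nqn.2016-06.io.spdk:cnode0, and a second bdevperf instance started with -z that the harness now waits on. A minimal stand-in for that wait, using only what the trace shows (the /var/tmp/bdevperf.sock path and the max_retries=100 budget); the real waitforlisten helper does more than this sketch.

    # poll until bdevperf has created its RPC socket, then it is safe to send RPCs
    sock=/var/tmp/bdevperf.sock
    retries=100
    until [ -S "$sock" ]; do
        retries=$((retries - 1))
        [ "$retries" -gt 0 ] || { echo "timed out waiting for $sock" >&2; exit 1; }
        sleep 0.1
    done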
00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:02.492 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:02.753 [2024-11-15 11:49:03.356276] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:02.753 [2024-11-15 11:49:03.356342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447891 ] 00:31:02.753 [2024-11-15 11:49:03.422876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.753 [2024-11-15 11:49:03.465062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.753 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.753 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:31:02.753 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:03.321 Nvme0n1 00:31:03.321 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:03.580 [ 00:31:03.580 { 00:31:03.580 "name": "Nvme0n1", 00:31:03.580 "aliases": [ 00:31:03.580 "db5b30f1-1718-4d1f-a450-7e6acf8d94de" 00:31:03.580 ], 00:31:03.580 "product_name": "NVMe disk", 00:31:03.580 "block_size": 4096, 00:31:03.580 "num_blocks": 38912, 00:31:03.580 "uuid": "db5b30f1-1718-4d1f-a450-7e6acf8d94de", 00:31:03.580 "numa_id": 1, 00:31:03.580 "assigned_rate_limits": { 00:31:03.580 "rw_ios_per_sec": 0, 00:31:03.580 "rw_mbytes_per_sec": 0, 00:31:03.580 "r_mbytes_per_sec": 0, 00:31:03.580 "w_mbytes_per_sec": 0 00:31:03.580 }, 00:31:03.580 "claimed": false, 00:31:03.580 "zoned": false, 00:31:03.580 "supported_io_types": { 00:31:03.580 "read": true, 00:31:03.580 "write": true, 00:31:03.580 "unmap": true, 00:31:03.580 "flush": true, 00:31:03.580 "reset": true, 00:31:03.580 "nvme_admin": true, 00:31:03.580 "nvme_io": true, 00:31:03.580 "nvme_io_md": false, 00:31:03.580 "write_zeroes": true, 00:31:03.580 "zcopy": false, 00:31:03.580 "get_zone_info": false, 00:31:03.580 "zone_management": false, 00:31:03.580 "zone_append": false, 00:31:03.580 "compare": true, 00:31:03.580 "compare_and_write": true, 00:31:03.580 "abort": true, 00:31:03.580 "seek_hole": false, 00:31:03.580 "seek_data": false, 00:31:03.580 "copy": true, 00:31:03.580 "nvme_iov_md": false 00:31:03.580 }, 00:31:03.580 "memory_domains": [ 00:31:03.580 { 00:31:03.580 "dma_device_id": "system", 00:31:03.580 "dma_device_type": 1 00:31:03.580 } 00:31:03.580 ], 00:31:03.580 "driver_specific": { 00:31:03.580 "nvme": [ 00:31:03.580 { 00:31:03.580 "trid": { 00:31:03.581 "trtype": "TCP", 00:31:03.581 "adrfam": "IPv4", 00:31:03.581 "traddr": "10.0.0.2", 00:31:03.581 "trsvcid": "4420", 00:31:03.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:03.581 }, 00:31:03.581 "ctrlr_data": 
{ 00:31:03.581 "cntlid": 1, 00:31:03.581 "vendor_id": "0x8086", 00:31:03.581 "model_number": "SPDK bdev Controller", 00:31:03.581 "serial_number": "SPDK0", 00:31:03.581 "firmware_revision": "25.01", 00:31:03.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.581 "oacs": { 00:31:03.581 "security": 0, 00:31:03.581 "format": 0, 00:31:03.581 "firmware": 0, 00:31:03.581 "ns_manage": 0 00:31:03.581 }, 00:31:03.581 "multi_ctrlr": true, 00:31:03.581 "ana_reporting": false 00:31:03.581 }, 00:31:03.581 "vs": { 00:31:03.581 "nvme_version": "1.3" 00:31:03.581 }, 00:31:03.581 "ns_data": { 00:31:03.581 "id": 1, 00:31:03.581 "can_share": true 00:31:03.581 } 00:31:03.581 } 00:31:03.581 ], 00:31:03.581 "mp_policy": "active_passive" 00:31:03.581 } 00:31:03.581 } 00:31:03.581 ] 00:31:03.581 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1447996 00:31:03.581 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:03.581 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:03.581 Running I/O for 10 seconds... 00:31:04.516 Latency(us) 00:31:04.516 [2024-11-15T10:49:05.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.516 Nvme0n1 : 1.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:31:04.516 [2024-11-15T10:49:05.369Z] =================================================================================================================== 00:31:04.516 [2024-11-15T10:49:05.369Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:31:04.516 00:31:05.455 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:05.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.715 Nvme0n1 : 2.00 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:31:05.715 [2024-11-15T10:49:06.568Z] =================================================================================================================== 00:31:05.715 [2024-11-15T10:49:06.568Z] Total : 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:31:05.715 00:31:05.715 true 00:31:05.715 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:05.715 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:05.974 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:05.974 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:05.974 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1447996 00:31:06.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.543 Nvme0n1 : 
3.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:31:06.543 [2024-11-15T10:49:07.396Z] =================================================================================================================== 00:31:06.543 [2024-11-15T10:49:07.396Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:31:06.543 00:31:07.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.479 Nvme0n1 : 4.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:31:07.479 [2024-11-15T10:49:08.332Z] =================================================================================================================== 00:31:07.479 [2024-11-15T10:49:08.332Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:31:07.479 00:31:08.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.856 Nvme0n1 : 5.00 14948.00 58.39 0.00 0.00 0.00 0.00 0.00 00:31:08.856 [2024-11-15T10:49:09.709Z] =================================================================================================================== 00:31:08.856 [2024-11-15T10:49:09.709Z] Total : 14948.00 58.39 0.00 0.00 0.00 0.00 0.00 00:31:08.856 00:31:09.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.792 Nvme0n1 : 6.00 14964.83 58.46 0.00 0.00 0.00 0.00 0.00 00:31:09.792 [2024-11-15T10:49:10.645Z] =================================================================================================================== 00:31:09.792 [2024-11-15T10:49:10.645Z] Total : 14964.83 58.46 0.00 0.00 0.00 0.00 0.00 00:31:09.792 00:31:10.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.728 Nvme0n1 : 7.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:10.728 [2024-11-15T10:49:11.581Z] =================================================================================================================== 00:31:10.728 [2024-11-15T10:49:11.581Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:10.728 00:31:11.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.670 Nvme0n1 : 8.00 15001.88 58.60 0.00 0.00 0.00 0.00 0.00 00:31:11.670 [2024-11-15T10:49:12.523Z] =================================================================================================================== 00:31:11.670 [2024-11-15T10:49:12.523Z] Total : 15001.88 58.60 0.00 0.00 0.00 0.00 0.00 00:31:11.670 00:31:12.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.606 Nvme0n1 : 9.00 15014.22 58.65 0.00 0.00 0.00 0.00 0.00 00:31:12.606 [2024-11-15T10:49:13.459Z] =================================================================================================================== 00:31:12.606 [2024-11-15T10:49:13.459Z] Total : 15014.22 58.65 0.00 0.00 0.00 0.00 0.00 00:31:12.606 00:31:13.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.543 Nvme0n1 : 10.00 15024.10 58.69 0.00 0.00 0.00 0.00 0.00 00:31:13.543 [2024-11-15T10:49:14.396Z] =================================================================================================================== 00:31:13.543 [2024-11-15T10:49:14.396Z] Total : 15024.10 58.69 0.00 0.00 0.00 0.00 0.00 00:31:13.543 00:31:13.543 00:31:13.543 Latency(us) 00:31:13.543 [2024-11-15T10:49:14.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.544 Nvme0n1 : 10.00 15030.83 58.71 0.00 0.00 8512.02 7298.33 26452.71 00:31:13.544 
[2024-11-15T10:49:14.397Z] =================================================================================================================== 00:31:13.544 [2024-11-15T10:49:14.397Z] Total : 15030.83 58.71 0.00 0.00 8512.02 7298.33 26452.71 00:31:13.544 { 00:31:13.544 "results": [ 00:31:13.544 { 00:31:13.544 "job": "Nvme0n1", 00:31:13.544 "core_mask": "0x2", 00:31:13.544 "workload": "randwrite", 00:31:13.544 "status": "finished", 00:31:13.544 "queue_depth": 128, 00:31:13.544 "io_size": 4096, 00:31:13.544 "runtime": 10.004038, 00:31:13.544 "iops": 15030.830550623657, 00:31:13.544 "mibps": 58.71418183837366, 00:31:13.544 "io_failed": 0, 00:31:13.544 "io_timeout": 0, 00:31:13.544 "avg_latency_us": 8512.01585007548, 00:31:13.544 "min_latency_us": 7298.327272727272, 00:31:13.544 "max_latency_us": 26452.712727272727 00:31:13.544 } 00:31:13.544 ], 00:31:13.544 "core_count": 1 00:31:13.544 } 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1447891 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1447891 ']' 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1447891 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1447891 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1447891' 00:31:13.544 killing process with pid 1447891 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1447891 00:31:13.544 Received shutdown signal, test time was about 10.000000 seconds 00:31:13.544 00:31:13.544 Latency(us) 00:31:13.544 [2024-11-15T10:49:14.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.544 [2024-11-15T10:49:14.397Z] =================================================================================================================== 00:31:13.544 [2024-11-15T10:49:14.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.544 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1447891 00:31:13.803 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.063 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:14.063 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:14.063 11:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1444180 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1444180 00:31:14.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1444180 Killed "${NVMF_APP[@]}" "$@" 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1449839 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1449839 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1449839 ']' 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
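The lvs_grow_dirty pass above reduces to a short RPC sequence: grow the lvol store while bdevperf keeps writing, confirm the new cluster count, then SIGKILL the target so the blobstore metadata is never flushed cleanly. A minimal sketch of that sequence, assuming $LVS_UUID holds the lvstore UUID reported earlier and $nvmfpid the target's PID:

  # grow the lvol store onto the full aio bdev while I/O is still running
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS_UUID"
  # the grow takes effect immediately: total_data_clusters should now read 99
  scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters'
  # leave the store dirty on purpose: SIGKILL skips any orderly blobstore shutdown
  kill -9 "$nvmfpid"

The "Performing recovery on blobstore" notice further down is the replay of that dirty metadata once the aio bdev is re-created.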
00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.323 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:14.323 [2024-11-15 11:49:15.132089] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.323 [2024-11-15 11:49:15.133411] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:14.323 [2024-11-15 11:49:15.133456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.583 [2024-11-15 11:49:15.235652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.583 [2024-11-15 11:49:15.282901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.583 [2024-11-15 11:49:15.282943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.583 [2024-11-15 11:49:15.282953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.583 [2024-11-15 11:49:15.282961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.583 [2024-11-15 11:49:15.282969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.583 [2024-11-15 11:49:15.283670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.583 [2024-11-15 11:49:15.358437] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.583 [2024-11-15 11:49:15.358740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
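The restart relaunches the target inside the test network namespace with the reactor in interrupt mode; the flags mirror the nvmfappstart line in the trace. A sketch of the equivalent manual invocation, assuming the cvl_0_0_ns_spdk namespace and the SPDK build tree used by this job:

  # one core (-m 0x1), shared-memory id 0 (-i 0), all tracepoint groups (-e 0xFFFF), interrupt rather than poll mode
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # wait for the JSON-RPC socket before issuing further rpc.py calls (rpc_get_methods is a cheap probe)
  scripts/rpc.py -t 60 rpc_get_methods > /dev/null

The intr-mode notices just above confirm that both the app thread and the nvmf poll-group thread came up interrupt-driven.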
00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.150 11:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:15.409 [2024-11-15 11:49:16.177065] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:15.409 [2024-11-15 11:49:16.177269] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:15.409 [2024-11-15 11:49:16.177353] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:15.409 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:15.668 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db5b30f1-1718-4d1f-a450-7e6acf8d94de -t 2000 00:31:15.927 [ 00:31:15.927 { 00:31:15.927 "name": "db5b30f1-1718-4d1f-a450-7e6acf8d94de", 00:31:15.927 "aliases": [ 00:31:15.927 "lvs/lvol" 00:31:15.927 ], 00:31:15.927 "product_name": "Logical Volume", 00:31:15.927 "block_size": 4096, 00:31:15.927 "num_blocks": 38912, 00:31:15.927 "uuid": "db5b30f1-1718-4d1f-a450-7e6acf8d94de", 00:31:15.927 "assigned_rate_limits": { 00:31:15.927 "rw_ios_per_sec": 0, 00:31:15.927 "rw_mbytes_per_sec": 0, 00:31:15.927 
"r_mbytes_per_sec": 0, 00:31:15.927 "w_mbytes_per_sec": 0 00:31:15.927 }, 00:31:15.927 "claimed": false, 00:31:15.927 "zoned": false, 00:31:15.927 "supported_io_types": { 00:31:15.927 "read": true, 00:31:15.927 "write": true, 00:31:15.927 "unmap": true, 00:31:15.927 "flush": false, 00:31:15.927 "reset": true, 00:31:15.927 "nvme_admin": false, 00:31:15.927 "nvme_io": false, 00:31:15.927 "nvme_io_md": false, 00:31:15.927 "write_zeroes": true, 00:31:15.927 "zcopy": false, 00:31:15.927 "get_zone_info": false, 00:31:15.927 "zone_management": false, 00:31:15.927 "zone_append": false, 00:31:15.927 "compare": false, 00:31:15.927 "compare_and_write": false, 00:31:15.927 "abort": false, 00:31:15.927 "seek_hole": true, 00:31:15.927 "seek_data": true, 00:31:15.927 "copy": false, 00:31:15.927 "nvme_iov_md": false 00:31:15.927 }, 00:31:15.927 "driver_specific": { 00:31:15.927 "lvol": { 00:31:15.927 "lvol_store_uuid": "82d542cd-9a59-4f5a-b0cb-d77ff01b3c95", 00:31:15.927 "base_bdev": "aio_bdev", 00:31:15.927 "thin_provision": false, 00:31:15.927 "num_allocated_clusters": 38, 00:31:15.927 "snapshot": false, 00:31:15.927 "clone": false, 00:31:15.927 "esnap_clone": false 00:31:15.927 } 00:31:15.927 } 00:31:15.927 } 00:31:15.927 ] 00:31:15.927 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:31:15.927 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:15.927 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:16.186 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:16.186 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:16.186 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:16.445 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:16.445 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:16.445 [2024-11-15 11:49:17.220166] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:16.445 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:16.445 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:16.445 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:16.445 11:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.445 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:16.446 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:16.705 request: 00:31:16.705 { 00:31:16.705 "uuid": "82d542cd-9a59-4f5a-b0cb-d77ff01b3c95", 00:31:16.705 "method": "bdev_lvol_get_lvstores", 00:31:16.705 "req_id": 1 00:31:16.705 } 00:31:16.705 Got JSON-RPC error response 00:31:16.705 response: 00:31:16.705 { 00:31:16.705 "code": -19, 00:31:16.705 "message": "No such device" 00:31:16.705 } 00:31:16.705 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:16.705 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:16.705 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:16.705 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:16.705 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:16.965 aio_bdev 00:31:16.965 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:16.965 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:16.965 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:16.965 11:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:31:16.965 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:16.965 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:16.965 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:16.966 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db5b30f1-1718-4d1f-a450-7e6acf8d94de -t 2000 00:31:17.225 [ 00:31:17.225 { 00:31:17.225 "name": "db5b30f1-1718-4d1f-a450-7e6acf8d94de", 00:31:17.225 "aliases": [ 00:31:17.225 "lvs/lvol" 00:31:17.225 ], 00:31:17.225 "product_name": "Logical Volume", 00:31:17.225 "block_size": 4096, 00:31:17.225 "num_blocks": 38912, 00:31:17.225 "uuid": "db5b30f1-1718-4d1f-a450-7e6acf8d94de", 00:31:17.225 "assigned_rate_limits": { 00:31:17.225 "rw_ios_per_sec": 0, 00:31:17.225 "rw_mbytes_per_sec": 0, 00:31:17.225 "r_mbytes_per_sec": 0, 00:31:17.225 "w_mbytes_per_sec": 0 00:31:17.225 }, 00:31:17.225 "claimed": false, 00:31:17.225 "zoned": false, 00:31:17.225 "supported_io_types": { 00:31:17.225 "read": true, 00:31:17.225 "write": true, 00:31:17.225 "unmap": true, 00:31:17.225 "flush": false, 00:31:17.225 "reset": true, 00:31:17.225 "nvme_admin": false, 00:31:17.225 "nvme_io": false, 00:31:17.225 "nvme_io_md": false, 00:31:17.225 "write_zeroes": true, 00:31:17.225 "zcopy": false, 00:31:17.225 "get_zone_info": false, 00:31:17.225 "zone_management": false, 00:31:17.225 "zone_append": false, 00:31:17.225 "compare": false, 00:31:17.225 "compare_and_write": false, 00:31:17.225 "abort": false, 00:31:17.225 "seek_hole": true, 00:31:17.225 "seek_data": true, 00:31:17.225 "copy": false, 00:31:17.225 "nvme_iov_md": false 00:31:17.225 }, 00:31:17.225 "driver_specific": { 00:31:17.225 "lvol": { 00:31:17.225 "lvol_store_uuid": "82d542cd-9a59-4f5a-b0cb-d77ff01b3c95", 00:31:17.225 "base_bdev": "aio_bdev", 00:31:17.225 "thin_provision": false, 00:31:17.225 "num_allocated_clusters": 38, 00:31:17.225 "snapshot": false, 00:31:17.225 "clone": false, 00:31:17.225 "esnap_clone": false 00:31:17.225 } 00:31:17.225 } 00:31:17.225 } 00:31:17.225 ] 00:31:17.225 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:31:17.225 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:17.225 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:17.484 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:17.484 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:17.484 11:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:17.743 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:17.743 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db5b30f1-1718-4d1f-a450-7e6acf8d94de 00:31:18.001 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82d542cd-9a59-4f5a-b0cb-d77ff01b3c95 00:31:18.260 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:18.519 00:31:18.519 real 0m18.647s 00:31:18.519 user 0m36.219s 00:31:18.519 sys 0m3.588s 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:18.519 ************************************ 00:31:18.519 END TEST lvs_grow_dirty 00:31:18.519 ************************************ 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:18.519 nvmf_trace.0 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
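The remainder of lvs_grow_dirty re-attaches the backing file, checks that the grow survived recovery, and tears the stack down. A condensed sketch of that verification, with $LVOL standing in for the lvol bdev name (db5b30f1-1718-4d1f-a450-7e6acf8d94de above) and $LVS_UUID for the lvstore UUID:

  # re-create the aio bdev over the same file; blobstore recovery replays the dirty metadata here
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # the recovered store must still show the grown geometry: 61 free of 99 total clusters
  scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters, .[0].total_data_clusters'
  # cleanup order matters: lvol first, then the lvstore, then the aio bdev and its backing file
  scripts/rpc.py bdev_lvol_delete "$LVOL"
  scripts/rpc.py bdev_lvol_delete_lvstore -u "$LVS_UUID"
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f test/nvmf/target/aio_bdev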
00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.519 rmmod nvme_tcp 00:31:18.519 rmmod nvme_fabrics 00:31:18.519 rmmod nvme_keyring 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.519 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1449839 ']' 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1449839 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1449839 ']' 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1449839 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:18.520 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1449839 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1449839' 00:31:18.778 killing process with pid 1449839 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1449839 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1449839 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.778 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.779 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.779 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.779 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.779 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.779 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:21.315 00:31:21.315 real 0m44.848s 00:31:21.315 user 0m55.926s 00:31:21.315 sys 0m9.557s 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:21.315 ************************************ 00:31:21.315 END TEST nvmf_lvs_grow 00:31:21.315 ************************************ 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:21.315 ************************************ 00:31:21.315 START TEST nvmf_bdev_io_wait 00:31:21.315 ************************************ 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:21.315 * Looking for test storage... 
00:31:21.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:21.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.315 --rc genhtml_branch_coverage=1 00:31:21.315 --rc genhtml_function_coverage=1 00:31:21.315 --rc genhtml_legend=1 00:31:21.315 --rc geninfo_all_blocks=1 00:31:21.315 --rc geninfo_unexecuted_blocks=1 00:31:21.315 00:31:21.315 ' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:21.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.315 --rc genhtml_branch_coverage=1 00:31:21.315 --rc genhtml_function_coverage=1 00:31:21.315 --rc genhtml_legend=1 00:31:21.315 --rc geninfo_all_blocks=1 00:31:21.315 --rc geninfo_unexecuted_blocks=1 00:31:21.315 00:31:21.315 ' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:21.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.315 --rc genhtml_branch_coverage=1 00:31:21.315 --rc genhtml_function_coverage=1 00:31:21.315 --rc genhtml_legend=1 00:31:21.315 --rc geninfo_all_blocks=1 00:31:21.315 --rc geninfo_unexecuted_blocks=1 00:31:21.315 00:31:21.315 ' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:21.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.315 --rc genhtml_branch_coverage=1 00:31:21.315 --rc genhtml_function_coverage=1 00:31:21.315 --rc genhtml_legend=1 00:31:21.315 --rc geninfo_all_blocks=1 00:31:21.315 --rc 
geninfo_unexecuted_blocks=1 00:31:21.315 00:31:21.315 ' 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.315 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.316 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
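Before any hardware probing, nvmf/common.sh seeds the initiator identity; the gen-hostnqn call in the trace is nvme-cli deriving an NQN from a freshly generated UUID. A small sketch of that setup, reusing the variable names from the trace (the ${NVME_HOSTNQN##*:} expansion is an assumption about how the host ID is split out, the trace only shows the resulting values):

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: host ID is the UUID suffix of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later 'nvme connect' invocations splice "${NVME_HOST[@]}" into their argument list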
00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
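The scan that follows walks every cached PCI function against the e810/x722/mlx ID lists built above; the "Found 0000:af:00.0 (0x8086 - 0x159b)" lines just below are the two Intel E810 ports this rig exposes, bound to the ice driver. A rough stand-alone equivalent (lspci here is an assumption, the harness reads the same data from sysfs via its pci_bus_cache):

  # list E810-class functions (vendor 0x8086, device 0x159b) with their kernel driver
  lspci -d 8086:159b -k
  # the matching netdevs (cvl_0_0, cvl_0_1) hang off the PCI device's sysfs node
  ls /sys/bus/pci/devices/0000:af:00.0/net/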
00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:26.590 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:26.590 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:26.590 Found net devices under 0000:af:00.0: cvl_0_0 00:31:26.590 
11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:26.590 Found net devices under 0000:af:00.1: cvl_0_1 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.590 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.849 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:31:26.850 00:31:26.850 --- 10.0.0.2 ping statistics --- 00:31:26.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.850 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:26.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:31:26.850 00:31:26.850 --- 10.0.0.1 ping statistics --- 00:31:26.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.850 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1454291 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1454291 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1454291 ']' 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.850 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:26.850 [2024-11-15 11:49:27.635426] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:26.850 [2024-11-15 11:49:27.636753] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:26.850 [2024-11-15 11:49:27.636795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.109 [2024-11-15 11:49:27.737046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:27.109 [2024-11-15 11:49:27.786996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.109 [2024-11-15 11:49:27.787040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.109 [2024-11-15 11:49:27.787051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.109 [2024-11-15 11:49:27.787060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.109 [2024-11-15 11:49:27.787068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.109 [2024-11-15 11:49:27.789129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.109 [2024-11-15 11:49:27.789233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.109 [2024-11-15 11:49:27.789326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.109 [2024-11-15 11:49:27.789327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.109 [2024-11-15 11:49:27.789690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
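[editor's note] The nvmf_tcp_init trace above reduces to a small namespace-based loopback topology. A minimal standalone sketch assembled only from the commands visible in this run (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from the log; paths are shortened and everything else is an assumption, not the harness itself):

# Move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator check
# The target is then launched inside the namespace, roughly:
# ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc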
00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.109 [2024-11-15 11:49:27.939828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:27.109 [2024-11-15 11:49:27.939963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:27.109 [2024-11-15 11:49:27.940597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:27.109 [2024-11-15 11:49:27.941211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
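[editor's note] The two rpc_cmd calls traced above appear to be what makes this a bdev_io_wait test: the bdev layer is started with a deliberately tiny bdev_io pool so submissions can run out of bdev_io structures and fall back to the io-wait path. In the harness rpc_cmd effectively forwards to scripts/rpc.py; a hedged standalone equivalent (default RPC socket assumed, flag meanings inferred from rpc.py's bdev_set_options options) would be:

./scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool of 5 entries, per-thread cache of 1
./scripts/rpc.py framework_start_init          # finish subsystem init, since the target ran with --wait-for-rpc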
00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.109 [2024-11-15 11:49:27.946061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.109 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.370 Malloc0 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.370 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.370 [2024-11-15 11:49:27.998308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1454413 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:27.370 11:49:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1454415 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.370 { 00:31:27.370 "params": { 00:31:27.370 "name": "Nvme$subsystem", 00:31:27.370 "trtype": "$TEST_TRANSPORT", 00:31:27.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.370 "adrfam": "ipv4", 00:31:27.370 "trsvcid": "$NVMF_PORT", 00:31:27.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.370 "hdgst": ${hdgst:-false}, 00:31:27.370 "ddgst": ${ddgst:-false} 00:31:27.370 }, 00:31:27.370 "method": "bdev_nvme_attach_controller" 00:31:27.370 } 00:31:27.370 EOF 00:31:27.370 )") 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1454417 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1454420 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.370 { 00:31:27.370 "params": { 00:31:27.370 "name": "Nvme$subsystem", 00:31:27.370 "trtype": "$TEST_TRANSPORT", 00:31:27.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.370 "adrfam": "ipv4", 00:31:27.370 "trsvcid": "$NVMF_PORT", 00:31:27.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.370 "hdgst": ${hdgst:-false}, 00:31:27.370 "ddgst": ${ddgst:-false} 00:31:27.370 }, 00:31:27.370 "method": "bdev_nvme_attach_controller" 00:31:27.370 } 00:31:27.370 EOF 00:31:27.370 
)") 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.370 { 00:31:27.370 "params": { 00:31:27.370 "name": "Nvme$subsystem", 00:31:27.370 "trtype": "$TEST_TRANSPORT", 00:31:27.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.370 "adrfam": "ipv4", 00:31:27.370 "trsvcid": "$NVMF_PORT", 00:31:27.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.370 "hdgst": ${hdgst:-false}, 00:31:27.370 "ddgst": ${ddgst:-false} 00:31:27.370 }, 00:31:27.370 "method": "bdev_nvme_attach_controller" 00:31:27.370 } 00:31:27.370 EOF 00:31:27.370 )") 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.370 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.370 { 00:31:27.370 "params": { 00:31:27.370 "name": "Nvme$subsystem", 00:31:27.370 "trtype": "$TEST_TRANSPORT", 00:31:27.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.370 "adrfam": "ipv4", 00:31:27.370 "trsvcid": "$NVMF_PORT", 00:31:27.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.370 "hdgst": ${hdgst:-false}, 00:31:27.370 "ddgst": ${ddgst:-false} 00:31:27.371 }, 00:31:27.371 "method": "bdev_nvme_attach_controller" 00:31:27.371 } 00:31:27.371 EOF 00:31:27.371 )") 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1454413 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.371 "params": { 00:31:27.371 "name": "Nvme1", 00:31:27.371 "trtype": "tcp", 00:31:27.371 "traddr": "10.0.0.2", 00:31:27.371 "adrfam": "ipv4", 00:31:27.371 "trsvcid": "4420", 00:31:27.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.371 "hdgst": false, 00:31:27.371 "ddgst": false 00:31:27.371 }, 00:31:27.371 "method": "bdev_nvme_attach_controller" 00:31:27.371 }' 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.371 "params": { 00:31:27.371 "name": "Nvme1", 00:31:27.371 "trtype": "tcp", 00:31:27.371 "traddr": "10.0.0.2", 00:31:27.371 "adrfam": "ipv4", 00:31:27.371 "trsvcid": "4420", 00:31:27.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.371 "hdgst": false, 00:31:27.371 "ddgst": false 00:31:27.371 }, 00:31:27.371 "method": "bdev_nvme_attach_controller" 00:31:27.371 }' 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.371 "params": { 00:31:27.371 "name": "Nvme1", 00:31:27.371 "trtype": "tcp", 00:31:27.371 "traddr": "10.0.0.2", 00:31:27.371 "adrfam": "ipv4", 00:31:27.371 "trsvcid": "4420", 00:31:27.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.371 "hdgst": false, 00:31:27.371 "ddgst": false 00:31:27.371 }, 00:31:27.371 "method": "bdev_nvme_attach_controller" 00:31:27.371 }' 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.371 11:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.371 "params": { 00:31:27.371 "name": "Nvme1", 00:31:27.371 "trtype": "tcp", 00:31:27.371 "traddr": "10.0.0.2", 00:31:27.371 "adrfam": "ipv4", 00:31:27.371 "trsvcid": "4420", 00:31:27.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.371 "hdgst": false, 00:31:27.371 "ddgst": false 00:31:27.371 }, 00:31:27.371 "method": "bdev_nvme_attach_controller" 00:31:27.371 }' 00:31:27.371 [2024-11-15 11:49:28.054890] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:27.371 [2024-11-15 11:49:28.054950] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:27.371 [2024-11-15 11:49:28.056048] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:31:27.371 [2024-11-15 11:49:28.056107] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:27.371 [2024-11-15 11:49:28.056539] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:27.371 [2024-11-15 11:49:28.056601] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:27.371 [2024-11-15 11:49:28.060070] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:27.371 [2024-11-15 11:49:28.060126] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:27.630 [2024-11-15 11:49:28.236514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.630 [2024-11-15 11:49:28.285813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:27.630 [2024-11-15 11:49:28.324486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.630 [2024-11-15 11:49:28.371855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.630 [2024-11-15 11:49:28.373796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:27.630 [2024-11-15 11:49:28.413314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:27.630 [2024-11-15 11:49:28.476087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.889 [2024-11-15 11:49:28.539186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:27.889 Running I/O for 1 seconds... 00:31:27.890 Running I/O for 1 seconds... 00:31:27.890 Running I/O for 1 seconds... 00:31:27.890 Running I/O for 1 seconds... 
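[editor's note] Each of the four bdevperf processes above receives its --json config on /dev/fd/63 via process substitution; the printf output earlier in the log shows the bdev_nvme_attach_controller fragment that gen_nvmf_target_json emits. A sketch of one launch written out by hand, using only values printed in this run (the outer "subsystems" wrapper is assumed to follow the standard SPDK JSON config layout, which the log does not print in full):

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
)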
00:31:28.826 12009.00 IOPS, 46.91 MiB/s [2024-11-15T10:49:29.679Z] 7416.00 IOPS, 28.97 MiB/s [2024-11-15T10:49:29.679Z] 11891.00 IOPS, 46.45 MiB/s 00:31:28.826 Latency(us) 00:31:28.826 [2024-11-15T10:49:29.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.826 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:28.826 Nvme1n1 : 1.01 12078.44 47.18 0.00 0.00 10565.18 3991.74 13822.14 00:31:28.826 [2024-11-15T10:49:29.679Z] =================================================================================================================== 00:31:28.826 [2024-11-15T10:49:29.679Z] Total : 12078.44 47.18 0.00 0.00 10565.18 3991.74 13822.14 00:31:28.826 00:31:28.826 Latency(us) 00:31:28.826 [2024-11-15T10:49:29.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.826 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:28.826 Nvme1n1 : 1.01 11963.06 46.73 0.00 0.00 10669.06 3678.95 20256.58 00:31:28.826 [2024-11-15T10:49:29.679Z] =================================================================================================================== 00:31:28.826 [2024-11-15T10:49:29.679Z] Total : 11963.06 46.73 0.00 0.00 10669.06 3678.95 20256.58 00:31:28.826 00:31:28.826 Latency(us) 00:31:28.826 [2024-11-15T10:49:29.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.826 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:28.826 Nvme1n1 : 1.01 7474.75 29.20 0.00 0.00 17048.34 6047.19 23473.80 00:31:28.826 [2024-11-15T10:49:29.679Z] =================================================================================================================== 00:31:28.826 [2024-11-15T10:49:29.679Z] Total : 7474.75 29.20 0.00 0.00 17048.34 6047.19 23473.80 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1454415 00:31:29.085 162056.00 IOPS, 633.03 MiB/s 00:31:29.085 Latency(us) 00:31:29.085 [2024-11-15T10:49:29.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.085 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:29.085 Nvme1n1 : 1.00 161676.95 631.55 0.00 0.00 786.85 357.47 2338.44 00:31:29.085 [2024-11-15T10:49:29.938Z] =================================================================================================================== 00:31:29.085 [2024-11-15T10:49:29.938Z] Total : 161676.95 631.55 0.00 0.00 786.85 357.47 2338.44 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1454417 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1454420 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:29.085 11:49:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.085 rmmod nvme_tcp 00:31:29.085 rmmod nvme_fabrics 00:31:29.085 rmmod nvme_keyring 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1454291 ']' 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1454291 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1454291 ']' 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1454291 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:31:29.085 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:29.344 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1454291 00:31:29.345 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:29.345 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:29.345 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1454291' 00:31:29.345 killing process with pid 1454291 00:31:29.345 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1454291 00:31:29.345 11:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1454291 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:29.345 11:49:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.345 11:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.879 00:31:31.879 real 0m10.542s 00:31:31.879 user 0m14.665s 00:31:31.879 sys 0m6.403s 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:31.879 ************************************ 00:31:31.879 END TEST nvmf_bdev_io_wait 00:31:31.879 ************************************ 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:31.879 ************************************ 00:31:31.879 START TEST nvmf_queue_depth 00:31:31.879 ************************************ 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:31.879 * Looking for test storage... 
00:31:31.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.879 --rc genhtml_branch_coverage=1 00:31:31.879 --rc genhtml_function_coverage=1 00:31:31.879 --rc genhtml_legend=1 00:31:31.879 --rc geninfo_all_blocks=1 00:31:31.879 --rc geninfo_unexecuted_blocks=1 00:31:31.879 00:31:31.879 ' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.879 --rc genhtml_branch_coverage=1 00:31:31.879 --rc genhtml_function_coverage=1 00:31:31.879 --rc genhtml_legend=1 00:31:31.879 --rc geninfo_all_blocks=1 00:31:31.879 --rc geninfo_unexecuted_blocks=1 00:31:31.879 00:31:31.879 ' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.879 --rc genhtml_branch_coverage=1 00:31:31.879 --rc genhtml_function_coverage=1 00:31:31.879 --rc genhtml_legend=1 00:31:31.879 --rc geninfo_all_blocks=1 00:31:31.879 --rc geninfo_unexecuted_blocks=1 00:31:31.879 00:31:31.879 ' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.879 --rc genhtml_branch_coverage=1 00:31:31.879 --rc genhtml_function_coverage=1 00:31:31.879 --rc genhtml_legend=1 00:31:31.879 --rc geninfo_all_blocks=1 00:31:31.879 --rc 
geninfo_unexecuted_blocks=1 00:31:31.879 00:31:31.879 ' 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.879 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.880 11:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:37.151 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:37.152 11:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:37.152 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:37.152 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
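As the common.sh trace above shows, for each detected E810 port the harness globs /sys/bus/pci/devices/$pci/net/ to learn the kernel netdev behind the PCI function before checking that it is up. A minimal shell sketch of that mapping, using the 0000:af:00.0 address from this run (reading operstate for the "up" check is an assumption; the harness's exact test is the [[ up == up ]] in nvmf/common.sh):

pci=0000:af:00.0
ls "/sys/bus/pci/devices/$pci/net/"        # prints the bound netdev; cvl_0_0 on this host
cat /sys/class/net/cvl_0_0/operstate       # assumed equivalent of the "up" check traced above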
00:31:37.152 Found net devices under 0000:af:00.0: cvl_0_0 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:37.152 Found net devices under 0000:af:00.1: cvl_0_1 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:37.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:31:37.152 00:31:37.152 --- 10.0.0.2 ping statistics --- 00:31:37.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.152 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:31:37.152 00:31:37.152 --- 10.0.0.1 ping statistics --- 00:31:37.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.152 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.152 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1458181 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1458181 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1458181 ']' 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
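The nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables calls: move the target-side port into its own namespace, address both ends of the link, open TCP port 4420, and ping in both directions. A condensed sketch with the names and addresses from this run (interface and namespace names are host-specific):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator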
00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:37.153 [2024-11-15 11:49:37.703809] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:37.153 [2024-11-15 11:49:37.705151] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:31:37.153 [2024-11-15 11:49:37.705194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.153 [2024-11-15 11:49:37.780503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.153 [2024-11-15 11:49:37.819112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.153 [2024-11-15 11:49:37.819144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.153 [2024-11-15 11:49:37.819151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.153 [2024-11-15 11:49:37.819156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.153 [2024-11-15 11:49:37.819161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.153 [2024-11-15 11:49:37.819687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.153 [2024-11-15 11:49:37.885251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:37.153 [2024-11-15 11:49:37.885448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
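The DPDK EAL parameter line above also records how the target itself was started: nvmf_tgt runs inside the target namespace, pinned to core 1 (-m 0x2), with all tracepoint groups enabled (-e 0xFFFF) and --interrupt-mode, which is what this nvmf_target_core_interrupt_mode suite exercises. A sketch of the launch plus a simple readiness poll (paths assumed relative to the SPDK tree; polling rpc_get_methods is one way to wait, whereas the harness's waitforlisten is more thorough):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# block until the target answers on its default RPC socket before configuring it
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done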
00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.153 [2024-11-15 11:49:37.968019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.153 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.411 Malloc0 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:37.411 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.412 [2024-11-15 11:49:38.028142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1458269 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1458269 /var/tmp/bdevperf.sock 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1458269 ']' 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:37.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.412 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:37.412 [2024-11-15 11:49:38.083268] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
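The rpc_cmd calls traced in queue_depth.sh above configure the target end to end: create the TCP transport, back it with a 64 MB malloc bdev, and expose it through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. The same sequence as direct rpc.py invocations (arguments copied from the log; paths assumed relative to the SPDK tree):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192              # transport opts from nvmf/common.sh
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420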
00:31:37.412 [2024-11-15 11:49:38.083326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458269 ] 00:31:37.412 [2024-11-15 11:49:38.179186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.412 [2024-11-15 11:49:38.229213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:37.687 NVMe0n1 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.687 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:37.953 Running I/O for 10 seconds... 00:31:39.915 10187.00 IOPS, 39.79 MiB/s [2024-11-15T10:49:41.704Z] 10244.50 IOPS, 40.02 MiB/s [2024-11-15T10:49:42.641Z] 10360.33 IOPS, 40.47 MiB/s [2024-11-15T10:49:44.018Z] 10489.75 IOPS, 40.98 MiB/s [2024-11-15T10:49:44.955Z] 10463.40 IOPS, 40.87 MiB/s [2024-11-15T10:49:45.893Z] 10543.50 IOPS, 41.19 MiB/s [2024-11-15T10:49:46.831Z] 10536.86 IOPS, 41.16 MiB/s [2024-11-15T10:49:47.767Z] 10602.50 IOPS, 41.42 MiB/s [2024-11-15T10:49:48.705Z] 10598.56 IOPS, 41.40 MiB/s [2024-11-15T10:49:48.705Z] 10636.70 IOPS, 41.55 MiB/s 00:31:47.852 Latency(us) 00:31:47.852 [2024-11-15T10:49:48.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.852 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:47.852 Verification LBA range: start 0x0 length 0x4000 00:31:47.852 NVMe0n1 : 10.07 10648.73 41.60 0.00 0.00 95755.01 25022.84 66250.94 00:31:47.852 [2024-11-15T10:49:48.705Z] =================================================================================================================== 00:31:47.852 [2024-11-15T10:49:48.705Z] Total : 10648.73 41.60 0.00 0.00 95755.01 25022.84 66250.94 00:31:47.852 { 00:31:47.852 "results": [ 00:31:47.852 { 00:31:47.852 "job": "NVMe0n1", 00:31:47.852 "core_mask": "0x1", 00:31:47.852 "workload": "verify", 00:31:47.852 "status": "finished", 00:31:47.852 "verify_range": { 00:31:47.852 "start": 0, 00:31:47.852 "length": 16384 00:31:47.852 }, 00:31:47.852 "queue_depth": 1024, 00:31:47.852 "io_size": 4096, 00:31:47.852 "runtime": 10.06674, 00:31:47.852 "iops": 10648.730373487346, 00:31:47.852 "mibps": 41.596603021434944, 00:31:47.852 "io_failed": 0, 00:31:47.852 "io_timeout": 0, 00:31:47.852 "avg_latency_us": 95755.00608647718, 00:31:47.852 "min_latency_us": 25022.836363636365, 00:31:47.852 "max_latency_us": 66250.93818181819 00:31:47.852 } 
00:31:47.852 ], 00:31:47.852 "core_count": 1 00:31:47.852 } 00:31:47.852 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1458269 00:31:47.852 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1458269 ']' 00:31:47.852 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1458269 00:31:47.852 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:31:47.852 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:47.852 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1458269 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1458269' 00:31:48.112 killing process with pid 1458269 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1458269 00:31:48.112 Received shutdown signal, test time was about 10.000000 seconds 00:31:48.112 00:31:48.112 Latency(us) 00:31:48.112 [2024-11-15T10:49:48.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.112 [2024-11-15T10:49:48.965Z] =================================================================================================================== 00:31:48.112 [2024-11-15T10:49:48.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1458269 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.112 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.112 rmmod nvme_tcp 00:31:48.112 rmmod nvme_fabrics 00:31:48.112 rmmod nvme_keyring 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
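The 10-second verify run whose per-second IOPS and latency summary appear above was driven from the initiator side: bdevperf starts idle (-z) on its own RPC socket at queue depth 1024 with 4 KiB I/O, the remote namespace is attached over TCP, and perform_tests kicks off the run. A condensed sketch with the arguments from the log (paths assumed relative to the SPDK tree; the backgrounding and final kill stand in for the harness's process management):

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # ~10.6k IOPS in this run
kill "$bdevperf_pid"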
00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1458181 ']' 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1458181 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1458181 ']' 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1458181 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:48.372 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1458181 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1458181' 00:31:48.372 killing process with pid 1458181 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1458181 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1458181 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.372 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.631 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.631 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.631 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.631 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.631 11:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.535 00:31:50.535 real 0m18.977s 00:31:50.535 user 0m22.338s 00:31:50.535 sys 0m5.896s 00:31:50.535 11:49:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:50.535 ************************************ 00:31:50.535 END TEST nvmf_queue_depth 00:31:50.535 ************************************ 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:50.535 ************************************ 00:31:50.535 START TEST nvmf_target_multipath 00:31:50.535 ************************************ 00:31:50.535 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:50.795 * Looking for test storage... 00:31:50.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.795 --rc genhtml_branch_coverage=1 00:31:50.795 --rc genhtml_function_coverage=1 00:31:50.795 --rc genhtml_legend=1 00:31:50.795 --rc geninfo_all_blocks=1 00:31:50.795 --rc geninfo_unexecuted_blocks=1 00:31:50.795 00:31:50.795 ' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.795 --rc genhtml_branch_coverage=1 00:31:50.795 --rc genhtml_function_coverage=1 00:31:50.795 --rc genhtml_legend=1 00:31:50.795 --rc geninfo_all_blocks=1 00:31:50.795 --rc geninfo_unexecuted_blocks=1 00:31:50.795 00:31:50.795 ' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.795 --rc genhtml_branch_coverage=1 00:31:50.795 --rc genhtml_function_coverage=1 00:31:50.795 --rc genhtml_legend=1 
00:31:50.795 --rc geninfo_all_blocks=1 00:31:50.795 --rc geninfo_unexecuted_blocks=1 00:31:50.795 00:31:50.795 ' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.795 --rc genhtml_branch_coverage=1 00:31:50.795 --rc genhtml_function_coverage=1 00:31:50.795 --rc genhtml_legend=1 00:31:50.795 --rc geninfo_all_blocks=1 00:31:50.795 --rc geninfo_unexecuted_blocks=1 00:31:50.795 00:31:50.795 ' 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.795 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.796 11:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.075 11:49:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.075 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.075 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.075 11:49:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.075 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.076 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.076 Found net devices under 0000:af:00.1: cvl_0_1 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.076 11:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:56.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:31:56.335 00:31:56.335 --- 10.0.0.2 ping statistics --- 00:31:56.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.335 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:31:56.335 00:31:56.335 --- 10.0.0.1 ping statistics --- 00:31:56.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.335 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.335 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:56.594 only one NIC for nvmf test 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:56.594 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:56.595 rmmod nvme_tcp 00:31:56.595 rmmod nvme_fabrics 00:31:56.595 rmmod nvme_keyring 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:56.595 11:49:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.595 11:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.498 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:58.757 11:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.757 00:31:58.757 real 0m8.021s 00:31:58.757 user 0m1.732s 00:31:58.757 sys 0m4.269s 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 ************************************ 00:31:58.757 END TEST nvmf_target_multipath 00:31:58.757 ************************************ 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 ************************************ 00:31:58.757 START TEST nvmf_zcopy 00:31:58.757 ************************************ 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:58.757 * Looking for test storage... 
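The nvmf_tcp_init / nvmf_tcp_fini trace above reduces to a short namespace recipe. A minimal sketch of the same sequence, using the interface names, addresses and namespace name from this run; remove_spdk_ns itself runs with xtrace disabled here, so the final namespace delete is an assumption rather than something shown in the trace:

# Bring-up: move the target-side port into its own namespace, address both ends, open port 4420
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace

# Teardown, as nvmf_tcp_fini does it: strip the tagged iptables rules, drop the namespace (assumed), flush the leftover address
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

The harness tags each rule with a SPDK_NVMF comment precisely so the teardown can remove them with the grep shown above instead of tracking rule numbers.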
00:31:58.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:31:58.757 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:59.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.017 --rc genhtml_branch_coverage=1 00:31:59.017 --rc genhtml_function_coverage=1 00:31:59.017 --rc genhtml_legend=1 00:31:59.017 --rc geninfo_all_blocks=1 00:31:59.017 --rc geninfo_unexecuted_blocks=1 00:31:59.017 00:31:59.017 ' 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:59.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.017 --rc genhtml_branch_coverage=1 00:31:59.017 --rc genhtml_function_coverage=1 00:31:59.017 --rc genhtml_legend=1 00:31:59.017 --rc geninfo_all_blocks=1 00:31:59.017 --rc geninfo_unexecuted_blocks=1 00:31:59.017 00:31:59.017 ' 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:59.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.017 --rc genhtml_branch_coverage=1 00:31:59.017 --rc genhtml_function_coverage=1 00:31:59.017 --rc genhtml_legend=1 00:31:59.017 --rc geninfo_all_blocks=1 00:31:59.017 --rc geninfo_unexecuted_blocks=1 00:31:59.017 00:31:59.017 ' 00:31:59.017 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:59.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.017 --rc genhtml_branch_coverage=1 00:31:59.018 --rc genhtml_function_coverage=1 00:31:59.018 --rc genhtml_legend=1 00:31:59.018 --rc geninfo_all_blocks=1 00:31:59.018 --rc geninfo_unexecuted_blocks=1 00:31:59.018 00:31:59.018 ' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.018 11:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.018 11:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:05.584 11:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:05.584 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:05.584 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:05.584 Found net devices under 0000:af:00.0: cvl_0_0 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:05.584 Found net devices under 0000:af:00.1: cvl_0_1 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:05.584 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:05.585 11:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:05.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:32:05.585 00:32:05.585 --- 10.0.0.2 ping statistics --- 00:32:05.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.585 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:05.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:32:05.585 00:32:05.585 --- 10.0.0.1 ping statistics --- 00:32:05.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.585 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1467283 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1467283 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1467283 ']' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 [2024-11-15 11:50:05.505675] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:05.585 [2024-11-15 11:50:05.507003] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:32:05.585 [2024-11-15 11:50:05.507048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.585 [2024-11-15 11:50:05.578508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.585 [2024-11-15 11:50:05.616885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.585 [2024-11-15 11:50:05.616916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.585 [2024-11-15 11:50:05.616922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.585 [2024-11-15 11:50:05.616928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.585 [2024-11-15 11:50:05.616932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.585 [2024-11-15 11:50:05.617483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.585 [2024-11-15 11:50:05.683054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:05.585 [2024-11-15 11:50:05.683254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
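nvmfappstart wraps the launch traced at nvmf/common.sh@508. Stripped of the harness, the equivalent by hand is roughly the following; the readiness poll stands in for waitforlisten, whose exact logic is not traced here, and spdk_get_version is only used as a cheap RPC to probe the socket with:

# Single-core target (-m 0x2), interrupt mode, all tracepoint groups enabled, run inside the test namespace
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Block until the application answers on its default RPC socket before sending configuration
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock spdk_get_version > /dev/null 2>&1; do
    sleep 0.5
done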
00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 [2024-11-15 11:50:05.773819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 [2024-11-15 11:50:05.798365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:05.585 11:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.585 malloc0 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:05.585 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:05.586 { 00:32:05.586 "params": { 00:32:05.586 "name": "Nvme$subsystem", 00:32:05.586 "trtype": "$TEST_TRANSPORT", 00:32:05.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.586 "adrfam": "ipv4", 00:32:05.586 "trsvcid": "$NVMF_PORT", 00:32:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.586 "hdgst": ${hdgst:-false}, 00:32:05.586 "ddgst": ${ddgst:-false} 00:32:05.586 }, 00:32:05.586 "method": "bdev_nvme_attach_controller" 00:32:05.586 } 00:32:05.586 EOF 00:32:05.586 )") 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:05.586 11:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:05.586 "params": { 00:32:05.586 "name": "Nvme1", 00:32:05.586 "trtype": "tcp", 00:32:05.586 "traddr": "10.0.0.2", 00:32:05.586 "adrfam": "ipv4", 00:32:05.586 "trsvcid": "4420", 00:32:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:05.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:05.586 "hdgst": false, 00:32:05.586 "ddgst": false 00:32:05.586 }, 00:32:05.586 "method": "bdev_nvme_attach_controller" 00:32:05.586 }' 00:32:05.586 [2024-11-15 11:50:05.894573] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
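Collected in one place, the zcopy target setup above is four RPCs plus the generated bdevperf config. A sketch with the rpc.py path factored out; the -o and -c 0 transport options are carried over verbatim from NVMF_TRANSPORT_OPTS in the trace, and bdevperf.json is an assumed file name standing in for the /dev/fd/62 process substitution the harness uses:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy                                        # TCP transport with zero-copy enabled
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10     # allow any host, max 10 namespaces
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0                                               # 32 MiB bdev, 4 KiB blocks
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf attaches over NVMe/TCP using the JSON printed by gen_nvmf_target_json above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json bdevperf.json -t 10 -q 128 -w verify -o 8192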
00:32:05.586 [2024-11-15 11:50:05.894631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467499 ] 00:32:05.586 [2024-11-15 11:50:05.990429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.586 [2024-11-15 11:50:06.038701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.586 Running I/O for 10 seconds... 00:32:07.455 8269.00 IOPS, 64.60 MiB/s [2024-11-15T10:50:09.241Z] 8321.00 IOPS, 65.01 MiB/s [2024-11-15T10:50:10.615Z] 8337.67 IOPS, 65.14 MiB/s [2024-11-15T10:50:11.549Z] 8336.75 IOPS, 65.13 MiB/s [2024-11-15T10:50:12.482Z] 8347.00 IOPS, 65.21 MiB/s [2024-11-15T10:50:13.417Z] 8352.67 IOPS, 65.26 MiB/s [2024-11-15T10:50:14.358Z] 8357.00 IOPS, 65.29 MiB/s [2024-11-15T10:50:15.306Z] 8359.25 IOPS, 65.31 MiB/s [2024-11-15T10:50:16.243Z] 8360.67 IOPS, 65.32 MiB/s [2024-11-15T10:50:16.243Z] 8367.10 IOPS, 65.37 MiB/s 00:32:15.390 Latency(us) 00:32:15.390 [2024-11-15T10:50:16.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.390 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:15.390 Verification LBA range: start 0x0 length 0x1000 00:32:15.390 Nvme1n1 : 10.01 8364.50 65.35 0.00 0.00 15239.02 1228.80 22282.24 00:32:15.390 [2024-11-15T10:50:16.243Z] =================================================================================================================== 00:32:15.390 [2024-11-15T10:50:16.243Z] Total : 8364.50 65.35 0.00 0.00 15239.02 1228.80 22282.24 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1469261 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.648 { 00:32:15.648 "params": { 00:32:15.648 "name": "Nvme$subsystem", 00:32:15.648 "trtype": "$TEST_TRANSPORT", 00:32:15.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.648 "adrfam": "ipv4", 00:32:15.648 "trsvcid": "$NVMF_PORT", 00:32:15.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.648 "hdgst": ${hdgst:-false}, 00:32:15.648 "ddgst": ${ddgst:-false} 00:32:15.648 }, 00:32:15.648 "method": "bdev_nvme_attach_controller" 00:32:15.648 } 00:32:15.648 EOF 00:32:15.648 )") 00:32:15.648 [2024-11-15 11:50:16.417799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:32:15.648 [2024-11-15 11:50:16.417828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:15.648 11:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.648 "params": { 00:32:15.648 "name": "Nvme1", 00:32:15.648 "trtype": "tcp", 00:32:15.648 "traddr": "10.0.0.2", 00:32:15.648 "adrfam": "ipv4", 00:32:15.648 "trsvcid": "4420", 00:32:15.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.649 "hdgst": false, 00:32:15.649 "ddgst": false 00:32:15.649 }, 00:32:15.649 "method": "bdev_nvme_attach_controller" 00:32:15.649 }' 00:32:15.649 [2024-11-15 11:50:16.429769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.649 [2024-11-15 11:50:16.429782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.649 [2024-11-15 11:50:16.441770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.649 [2024-11-15 11:50:16.441780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.649 [2024-11-15 11:50:16.453767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.649 [2024-11-15 11:50:16.453776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.649 [2024-11-15 11:50:16.463630] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:32:15.649 [2024-11-15 11:50:16.463687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469261 ] 00:32:15.649 [2024-11-15 11:50:16.465768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.649 [2024-11-15 11:50:16.465779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.649 [2024-11-15 11:50:16.477777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.649 [2024-11-15 11:50:16.477786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.649 [2024-11-15 11:50:16.489766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.649 [2024-11-15 11:50:16.489776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.908 [2024-11-15 11:50:16.501768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.908 [2024-11-15 11:50:16.501778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.908 [2024-11-15 11:50:16.513766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.908 [2024-11-15 11:50:16.513776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.908 [2024-11-15 11:50:16.525767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.908 [2024-11-15 11:50:16.525777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.908 [2024-11-15 11:50:16.537768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.908 [2024-11-15 11:50:16.537777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.908 [2024-11-15 11:50:16.549768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.908 [2024-11-15 11:50:16.549778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.908 [2024-11-15 11:50:16.557221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.908 [2024-11-15 11:50:16.561767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.561778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.573771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.573783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.585775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.585784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.597767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.597776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.606365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.909 [2024-11-15 11:50:16.609777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:15.909 [2024-11-15 11:50:16.609792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.621777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.621792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.633772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.633788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.645770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.645782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.657769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.657779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.669771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.669781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.681778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.681794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.693766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.693775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.705777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.705797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.717771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.717785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.729773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.729788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.741771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.741783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.909 [2024-11-15 11:50:16.753767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.909 [2024-11-15 11:50:16.753776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.765766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.765774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.777768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.777778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 
11:50:16.789768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.789779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.801767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.801775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.813767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.813776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.825769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.825781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.837766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.837774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.849766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.849774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.861765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.861775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.873773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.873789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.885780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.885792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 Running I/O for 5 seconds... 
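The second bdevperf job above (--json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192) runs while add-namespace RPCs keep being issued against nqn.2016-06.io.spdk:cnode1. Because malloc0 was already attached as NSID 1 at target/zcopy.sh@30, each of those calls is rejected, which is what produces the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs below. As an illustration only (not the test's own rpc_cmd loop), a single such rejection could be reproduced with SPDK's rpc.py like this:

# Sketch, assuming the default local RPC socket: re-adding an NSID that is
# already attached to the subsystem is expected to fail on the target side.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

if ! "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
    echo "nvmf_subsystem_add_ns rejected: NSID 1 already in use" >&2
fi

The per-second IOPS samples interleaved with the errors below show the randrw job continuing while these RPCs are rejected.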
00:32:16.168 [2024-11-15 11:50:16.900617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.900636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.914562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.914581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.929063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.929081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.168 [2024-11-15 11:50:16.942763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.168 [2024-11-15 11:50:16.942781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.169 [2024-11-15 11:50:16.957144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.169 [2024-11-15 11:50:16.957161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.169 [2024-11-15 11:50:16.971278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.169 [2024-11-15 11:50:16.971296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.169 [2024-11-15 11:50:16.985021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.169 [2024-11-15 11:50:16.985039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.169 [2024-11-15 11:50:16.998467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.169 [2024-11-15 11:50:16.998484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.169 [2024-11-15 11:50:17.013477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.169 [2024-11-15 11:50:17.013497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.027542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.027560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.041697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.041716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.055486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.055504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.069899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.069917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.083144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.083162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.093775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 
[2024-11-15 11:50:17.093793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.107550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.107568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.121057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.121075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.134794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.134813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.149631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.149650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.163319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.163337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.177898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.177917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.190291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.190309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.203278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.203297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.217320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.217338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.230936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.230958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.245386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.245405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.259192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.259210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.427 [2024-11-15 11:50:17.273670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.427 [2024-11-15 11:50:17.273687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.286860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.286877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.300683] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.300701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.313976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.313994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.327361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.327379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.341202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.341221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.354827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.354845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.369548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.369567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.383309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.383327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.397178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.397196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.410924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.410942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.425384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.425403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.439034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.439053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.453594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.453612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.467387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.467405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.686 [2024-11-15 11:50:17.480973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.686 [2024-11-15 11:50:17.480991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.687 [2024-11-15 11:50:17.494096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.687 [2024-11-15 11:50:17.494113] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.687 [2024-11-15 11:50:17.507181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.687 [2024-11-15 11:50:17.507199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.687 [2024-11-15 11:50:17.521124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.687 [2024-11-15 11:50:17.521141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.687 [2024-11-15 11:50:17.534835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.687 [2024-11-15 11:50:17.534853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.548629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.548649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.562174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.562192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.574695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.574714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.589783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.589803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.603164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.603186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.617705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.617724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.630809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.630827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.645136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.645154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.658641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.658659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.673083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.673102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.686675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.686693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.699130] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.699147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.710905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.710923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.725587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.725607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.739242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.739261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.753100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.753118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.766559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.766577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.778876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.778894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.946 [2024-11-15 11:50:17.793517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.946 [2024-11-15 11:50:17.793539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.806857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.806875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.821968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.821987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.834997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.835017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.849352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.849371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.863041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.863064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.877849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.877868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.891169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.891187] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 18263.00 IOPS, 142.68 MiB/s [2024-11-15T10:50:18.058Z] [2024-11-15 11:50:17.904947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.904967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.918421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.918441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.933773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.933791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.946661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.946679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.959050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.959069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.969056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.969074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.982587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.982605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:17.997198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:17.997218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:18.010591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:18.010610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:18.025049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:18.025068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:18.038107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:18.038125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.205 [2024-11-15 11:50:18.053838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.205 [2024-11-15 11:50:18.053858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.067311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.067331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.081784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.081803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 
11:50:18.095215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.095234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.108696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.108715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.122267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.122289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.137549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.137568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.151480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.151498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.165154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.165173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.178515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.178533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.192936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.192955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.206510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.206528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.221703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.221723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.235311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.235328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.249482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.249500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.263304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.263321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.277334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.463 [2024-11-15 11:50:18.277352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.463 [2024-11-15 11:50:18.291308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.464 [2024-11-15 11:50:18.291326] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.464 [2024-11-15 11:50:18.305439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.464 [2024-11-15 11:50:18.305462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.319304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.319323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.333211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.333228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.347199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.347217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.360744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.360761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.374203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.374220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.389773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.389791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.403169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.403187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.417308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.417328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.431130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.431148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.445422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.445440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.459045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.459063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.473524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.473544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.486949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.486967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.502043] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.502060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.516261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.516279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.530159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.530177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.543486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.543504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.557002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.557021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.722 [2024-11-15 11:50:18.570351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.722 [2024-11-15 11:50:18.570369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.583173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.583191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.596700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.596718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.610420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.610437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.625087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.625105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.639043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.639061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.653551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.653569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.667413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.667431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.682137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.682154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.696844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.696861] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.710540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.710558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.725395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.725413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.739200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.739218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.754154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.754171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.769535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.769553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.783486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.783504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.797789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.797807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.810980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.810998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.981 [2024-11-15 11:50:18.823232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.981 [2024-11-15 11:50:18.823251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.240 [2024-11-15 11:50:18.837660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.240 [2024-11-15 11:50:18.837678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.240 [2024-11-15 11:50:18.850267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.240 [2024-11-15 11:50:18.850285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.240 [2024-11-15 11:50:18.863291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.240 [2024-11-15 11:50:18.863310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.240 [2024-11-15 11:50:18.877740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.240 [2024-11-15 11:50:18.877759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.891515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.891533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 18202.00 IOPS, 142.20 MiB/s [2024-11-15T10:50:19.094Z] [2024-11-15 
11:50:18.906365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.906383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.921794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.921812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.934429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.934447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.947330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.947348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.961122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.961140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.974809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.974827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:18.989407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:18.989425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.003335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.003354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.017749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.017767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.031602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.031620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.044876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.044893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.058879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.058897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.073297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.073316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.241 [2024-11-15 11:50:19.086935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.241 [2024-11-15 11:50:19.086953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.097812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.097832] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.111027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.111045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.125220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.125241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.139018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.139037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.153556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.153575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.167263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.167286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.181433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.181452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.195634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.195654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.209250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.209269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.222891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.222910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.237325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.237343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.250930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.250950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.265017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.265037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.278684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.278703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.293128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.293148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.306697] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.306716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.321953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.321972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.335408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.335426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.500 [2024-11-15 11:50:19.349295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.500 [2024-11-15 11:50:19.349314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.363110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.363129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.377733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.377752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.391297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.391317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.405633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.405651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.419734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.419753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.433481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.433504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.447371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.447392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.461576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.461595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.475195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.475213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.489223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.489243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.502956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.502976] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.517877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.517896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.531089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.531108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.544641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.544660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.558044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.558062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.573171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.573190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.586893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.586912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.759 [2024-11-15 11:50:19.601126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.759 [2024-11-15 11:50:19.601145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.614818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.614836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.629148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.629166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.643273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.643291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.657709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.657728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.671329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.671347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.684868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.684886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.699267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.699289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.713399] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.713417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.726536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.726555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.739480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.739498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.753298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.753316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.767426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.767443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.781129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.781146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.794860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.794877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.809334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.809352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.823006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.823023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.837169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.837186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.850545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.850563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.018 [2024-11-15 11:50:19.865916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.018 [2024-11-15 11:50:19.865934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.879205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.879223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.893162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.893180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 18181.33 IOPS, 142.04 MiB/s [2024-11-15T10:50:20.131Z] [2024-11-15 11:50:19.907195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:19.278 [2024-11-15 11:50:19.907212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.916908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.916926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.930570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.930588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.944641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.944660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.958242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.958260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.971222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.971240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.985328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.985346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:19.999176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:19.999194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.014157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.014193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.030236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.030256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.045812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.045833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.058663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.058681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.070412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.070430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.085527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.085545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.099305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.099323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.278 [2024-11-15 11:50:20.115026] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.278 [2024-11-15 11:50:20.115045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.129247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.129266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.142560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.142578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.157933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.157952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.170801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.170819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.185051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.185069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.199146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.199165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.213110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.213130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.226744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.226763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.242096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.242115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.255134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.255153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.269592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.269610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.283498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.283517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.297941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.297961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.310895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.310914] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.323216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.323235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.337020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.337038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.350741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.350760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.365117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.365137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.537 [2024-11-15 11:50:20.379222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.537 [2024-11-15 11:50:20.379241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.796 [2024-11-15 11:50:20.393413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.393432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.406871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.406890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.422002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.422021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.435105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.435124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.449004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.449021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.462573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.462590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.475117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.475135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.489528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.489546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.503178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.503197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.517342] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.517362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.530912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.530931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.545474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.545493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.559196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.559214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.572888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.572906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.586925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.586943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.601245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.601263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.615077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.615096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.629251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.629269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.797 [2024-11-15 11:50:20.642832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.797 [2024-11-15 11:50:20.642851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.656601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.656620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.669826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.669845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.682911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.682929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.696947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.696966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.710518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.710536] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.725617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.725636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.739281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.739303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.753734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.753752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.767253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.767271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.781160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.781179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.794988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.795006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.809713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.809731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.823270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.823289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.837224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.837242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.850755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.850773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.865713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.865732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.878969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.878988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 [2024-11-15 11:50:20.890720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.890738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.056 18167.00 IOPS, 141.93 MiB/s [2024-11-15T10:50:20.909Z] [2024-11-15 11:50:20.905053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.056 [2024-11-15 11:50:20.905072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 
11:50:20.918699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:20.918718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:20.933028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:20.933047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:20.947037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:20.947055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:20.961470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:20.961488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:20.975025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:20.975044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:20.990207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:20.990224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.002772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.002801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.017047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.017066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.030405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.030423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.044813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.044840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.058749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.058768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.072907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.072926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.086382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.086400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.099329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.099346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.113223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.113241] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.126758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.126776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.141109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.141128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.315 [2024-11-15 11:50:21.154449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.315 [2024-11-15 11:50:21.154473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.169962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.169980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.182590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.182608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.197391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.197408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.211207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.211225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.225728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.225746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.239546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.239564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.253416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.253434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.266638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.266658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.279082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.279099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.293617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.293634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.307153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.307171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.321232] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.321250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.334500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.334517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.349533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.574 [2024-11-15 11:50:21.349551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.574 [2024-11-15 11:50:21.363276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.575 [2024-11-15 11:50:21.363293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.575 [2024-11-15 11:50:21.376805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.575 [2024-11-15 11:50:21.376824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.575 [2024-11-15 11:50:21.389965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.575 [2024-11-15 11:50:21.389983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.575 [2024-11-15 11:50:21.402287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.575 [2024-11-15 11:50:21.402305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.575 [2024-11-15 11:50:21.414738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.575 [2024-11-15 11:50:21.414756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.426555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.833 [2024-11-15 11:50:21.426572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.438996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.833 [2024-11-15 11:50:21.439014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.451452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.833 [2024-11-15 11:50:21.451475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.465641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.833 [2024-11-15 11:50:21.465660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.479450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.833 [2024-11-15 11:50:21.479473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.492918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.833 [2024-11-15 11:50:21.492935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.833 [2024-11-15 11:50:21.506176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.506192] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.521388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.521407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.534884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.534903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.549410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.549430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.563124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.563142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.577242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.577260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.591212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.591230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.605457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.605481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.619162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.619180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.633094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.633112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.646629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.646646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.661218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.661236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.834 [2024-11-15 11:50:21.674783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.834 [2024-11-15 11:50:21.674802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.689510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.689528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.703043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.703060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.717969] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.717987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.730406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.730424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.745378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.745396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.759028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.759046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.773536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.773555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.787570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.787588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.801099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.801117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.814342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.814359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.829674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.829692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.843441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.843463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.858187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.858204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.873603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.873621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.887306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.887325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.901515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.901538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 18190.60 IOPS, 142.11 MiB/s 00:32:21.093 Latency(us) 00:32:21.093 [2024-11-15T10:50:21.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.093 Job: 
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:21.093 Nvme1n1 : 5.01 18193.77 142.14 0.00 0.00 7028.33 2308.65 13702.98 00:32:21.093 [2024-11-15T10:50:21.946Z] =================================================================================================================== 00:32:21.093 [2024-11-15T10:50:21.946Z] Total : 18193.77 142.14 0.00 0.00 7028.33 2308.65 13702.98 00:32:21.093 [2024-11-15 11:50:21.909775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.909791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.921772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.921787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.093 [2024-11-15 11:50:21.933771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.093 [2024-11-15 11:50:21.933781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:21.945779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:21.945795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:21.957770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:21.957782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:21.969773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:21.969785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:21.981782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:21.981801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:21.993770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:21.993782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.005768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.005780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.017769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.017780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.029766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.029775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.041771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.041779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.053771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.053782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.065767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.065775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 [2024-11-15 11:50:22.077767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.353 [2024-11-15 11:50:22.077776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1469261) - No such process 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1469261 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:21.353 delay0 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.353 11:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:21.353 [2024-11-15 11:50:22.184236] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:29.468 Initializing NVMe Controllers 00:32:29.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:29.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:29.468 Initialization complete. Launching workers. 
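For context, the tail of zcopy.sh above swaps the namespace behind NSID 1 for a delay bdev and then drives it with the abort example. A minimal sketch of that sequence, issued by hand with scripts/rpc.py instead of the test framework's rpc_cmd wrapper (the repository path, the malloc0 base bdev, and the 10.0.0.2:4420 listener are assumptions carried over from this run), might look like:
  # Sketch only: assumes an SPDK target is already listening on 10.0.0.2:4420
  # with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc0 bdev, as in this log.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Drop the namespace that the duplicate-NSID error-injection loop was colliding with.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev (latency arguments in microseconds) and expose it as NSID 1 again.
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive the slowed namespace with the abort example over TCP, using the same flags as above.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
The large delay values make outstanding I/O easy to abort, which is what the NS/CTRLR abort counters below are measuring.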
00:32:29.468 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 28634 00:32:29.468 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28743, failed to submit 128 00:32:29.468 success 28662, unsuccessful 81, failed 0 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.468 rmmod nvme_tcp 00:32:29.468 rmmod nvme_fabrics 00:32:29.468 rmmod nvme_keyring 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1467283 ']' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1467283 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1467283 ']' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1467283 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1467283 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1467283' 00:32:29.468 killing process with pid 1467283 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1467283 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1467283 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.468 11:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.468 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.370 00:32:31.370 real 0m32.275s 00:32:31.370 user 0m42.279s 00:32:31.370 sys 0m12.608s 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.370 ************************************ 00:32:31.370 END TEST nvmf_zcopy 00:32:31.370 ************************************ 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:31.370 ************************************ 00:32:31.370 START TEST nvmf_nmic 00:32:31.370 ************************************ 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:31.370 * Looking for test storage... 
00:32:31.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.370 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.371 --rc genhtml_branch_coverage=1 00:32:31.371 --rc genhtml_function_coverage=1 00:32:31.371 --rc genhtml_legend=1 00:32:31.371 --rc geninfo_all_blocks=1 00:32:31.371 --rc geninfo_unexecuted_blocks=1 00:32:31.371 00:32:31.371 ' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.371 --rc genhtml_branch_coverage=1 00:32:31.371 --rc genhtml_function_coverage=1 00:32:31.371 --rc genhtml_legend=1 00:32:31.371 --rc geninfo_all_blocks=1 00:32:31.371 --rc geninfo_unexecuted_blocks=1 00:32:31.371 00:32:31.371 ' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.371 --rc genhtml_branch_coverage=1 00:32:31.371 --rc genhtml_function_coverage=1 00:32:31.371 --rc genhtml_legend=1 00:32:31.371 --rc geninfo_all_blocks=1 00:32:31.371 --rc geninfo_unexecuted_blocks=1 00:32:31.371 00:32:31.371 ' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.371 --rc genhtml_branch_coverage=1 00:32:31.371 --rc genhtml_function_coverage=1 00:32:31.371 --rc genhtml_legend=1 00:32:31.371 --rc geninfo_all_blocks=1 00:32:31.371 --rc geninfo_unexecuted_blocks=1 00:32:31.371 00:32:31.371 ' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.371 11:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.371 11:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.371 11:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.641 11:50:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:36.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.641 11:50:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:36.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:36.641 Found net devices under 0000:af:00.0: cvl_0_0 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.641 
11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:36.641 Found net devices under 0000:af:00.1: cvl_0_1 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.641 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:32:36.900 00:32:36.900 --- 10.0.0.2 ping statistics --- 00:32:36.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.900 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:32:36.900 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:32:36.900 00:32:36.900 --- 10.0.0.1 ping statistics --- 00:32:36.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.900 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1475028 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1475028 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1475028 ']' 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:36.901 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.901 [2024-11-15 11:50:37.725210] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:36.901 [2024-11-15 11:50:37.726571] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:32:36.901 [2024-11-15 11:50:37.726616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.159 [2024-11-15 11:50:37.827774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:37.159 [2024-11-15 11:50:37.877801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.159 [2024-11-15 11:50:37.877846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.159 [2024-11-15 11:50:37.877857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.159 [2024-11-15 11:50:37.877867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.159 [2024-11-15 11:50:37.877875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.159 [2024-11-15 11:50:37.879931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.159 [2024-11-15 11:50:37.880047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.159 [2024-11-15 11:50:37.880137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.159 [2024-11-15 11:50:37.880141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.159 [2024-11-15 11:50:37.954852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:37.159 [2024-11-15 11:50:37.955039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:37.159 [2024-11-15 11:50:37.955267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:37.159 [2024-11-15 11:50:37.955718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:37.159 [2024-11-15 11:50:37.955976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:37.159 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:37.159 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:32:37.159 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.159 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:37.159 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 [2024-11-15 11:50:38.020857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 Malloc0 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.418 
11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 [2024-11-15 11:50:38.084946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:37.418 test case1: single bdev can't be used in multiple subsystems 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.418 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.419 [2024-11-15 11:50:38.112634] bdev.c:8468:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:37.419 [2024-11-15 11:50:38.112661] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:37.419 [2024-11-15 11:50:38.112672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:37.419 request: 00:32:37.419 { 00:32:37.419 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:37.419 "namespace": { 00:32:37.419 "bdev_name": "Malloc0", 00:32:37.419 "no_auto_visible": false, 00:32:37.419 "no_metadata": false 00:32:37.419 }, 00:32:37.419 "method": "nvmf_subsystem_add_ns", 00:32:37.419 "req_id": 1 00:32:37.419 } 00:32:37.419 Got JSON-RPC error response 00:32:37.419 response: 00:32:37.419 { 00:32:37.419 "code": -32602, 00:32:37.419 "message": "Invalid parameters" 00:32:37.419 } 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:37.419 11:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:37.419 Adding namespace failed - expected result. 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:37.419 test case2: host connect to nvmf target in multiple paths 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.419 [2024-11-15 11:50:38.124763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.419 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:37.678 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:37.936 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:37.936 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:32:37.936 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:32:37.936 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:32:37.936 11:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:32:39.836 11:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:39.836 [global] 00:32:39.836 thread=1 00:32:39.836 invalidate=1 
00:32:39.836 rw=write 00:32:39.836 time_based=1 00:32:39.836 runtime=1 00:32:39.836 ioengine=libaio 00:32:39.836 direct=1 00:32:39.836 bs=4096 00:32:39.836 iodepth=1 00:32:39.836 norandommap=0 00:32:39.836 numjobs=1 00:32:39.836 00:32:39.836 verify_dump=1 00:32:39.836 verify_backlog=512 00:32:39.836 verify_state_save=0 00:32:39.836 do_verify=1 00:32:39.836 verify=crc32c-intel 00:32:39.836 [job0] 00:32:39.836 filename=/dev/nvme0n1 00:32:39.836 Could not set queue depth (nvme0n1) 00:32:40.401 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:40.401 fio-3.35 00:32:40.401 Starting 1 thread 00:32:41.336 00:32:41.337 job0: (groupid=0, jobs=1): err= 0: pid=1475821: Fri Nov 15 11:50:42 2024 00:32:41.337 read: IOPS=1619, BW=6478KiB/s (6633kB/s)(6484KiB/1001msec) 00:32:41.337 slat (nsec): min=7283, max=35799, avg=8311.26, stdev=1233.28 00:32:41.337 clat (usec): min=251, max=492, avg=326.11, stdev=62.54 00:32:41.337 lat (usec): min=259, max=500, avg=334.43, stdev=62.58 00:32:41.337 clat percentiles (usec): 00:32:41.337 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 277], 20.00th=[ 281], 00:32:41.337 | 30.00th=[ 285], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:32:41.337 | 70.00th=[ 330], 80.00th=[ 424], 90.00th=[ 433], 95.00th=[ 437], 00:32:41.337 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 478], 99.95th=[ 494], 00:32:41.337 | 99.99th=[ 494] 00:32:41.337 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:41.337 slat (usec): min=10, max=25903, avg=24.93, stdev=572.13 00:32:41.337 clat (usec): min=163, max=355, avg=191.17, stdev=17.90 00:32:41.337 lat (usec): min=182, max=26162, avg=216.10, stdev=573.90 00:32:41.337 clat percentiles (usec): 00:32:41.337 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 180], 00:32:41.337 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:32:41.337 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 219], 95.00th=[ 225], 00:32:41.337 | 99.00th=[ 247], 99.50th=[ 285], 99.90th=[ 297], 99.95th=[ 302], 00:32:41.337 | 99.99th=[ 355] 00:32:41.337 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:32:41.337 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:41.337 lat (usec) : 250=55.30%, 500=44.70% 00:32:41.337 cpu : usr=3.50%, sys=5.60%, ctx=3674, majf=0, minf=1 00:32:41.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.337 issued rwts: total=1621,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:41.337 00:32:41.337 Run status group 0 (all jobs): 00:32:41.337 READ: bw=6478KiB/s (6633kB/s), 6478KiB/s-6478KiB/s (6633kB/s-6633kB/s), io=6484KiB (6640kB), run=1001-1001msec 00:32:41.337 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:32:41.337 00:32:41.337 Disk stats (read/write): 00:32:41.337 nvme0n1: ios=1562/1664, merge=0/0, ticks=1467/297, in_queue=1764, util=98.20% 00:32:41.337 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:41.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.597 rmmod nvme_tcp 00:32:41.597 rmmod nvme_fabrics 00:32:41.597 rmmod nvme_keyring 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1475028 ']' 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1475028 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1475028 ']' 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1475028 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:41.597 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1475028 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1475028' 00:32:41.856 killing process with pid 
1475028 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1475028 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1475028 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.856 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.390 00:32:44.390 real 0m12.940s 00:32:44.390 user 0m29.318s 00:32:44.390 sys 0m5.984s 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:44.390 ************************************ 00:32:44.390 END TEST nvmf_nmic 00:32:44.390 ************************************ 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:44.390 ************************************ 00:32:44.390 START TEST nvmf_fio_target 00:32:44.390 ************************************ 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:44.390 * Looking for test storage... 
00:32:44.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:44.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.390 --rc genhtml_branch_coverage=1 00:32:44.390 --rc genhtml_function_coverage=1 00:32:44.390 --rc genhtml_legend=1 00:32:44.390 --rc geninfo_all_blocks=1 00:32:44.390 --rc geninfo_unexecuted_blocks=1 00:32:44.390 00:32:44.390 ' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:44.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.390 --rc genhtml_branch_coverage=1 00:32:44.390 --rc genhtml_function_coverage=1 00:32:44.390 --rc genhtml_legend=1 00:32:44.390 --rc geninfo_all_blocks=1 00:32:44.390 --rc geninfo_unexecuted_blocks=1 00:32:44.390 00:32:44.390 ' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:44.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.390 --rc genhtml_branch_coverage=1 00:32:44.390 --rc genhtml_function_coverage=1 00:32:44.390 --rc genhtml_legend=1 00:32:44.390 --rc geninfo_all_blocks=1 00:32:44.390 --rc geninfo_unexecuted_blocks=1 00:32:44.390 00:32:44.390 ' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:44.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.390 --rc genhtml_branch_coverage=1 00:32:44.390 --rc genhtml_function_coverage=1 00:32:44.390 --rc genhtml_legend=1 00:32:44.390 --rc geninfo_all_blocks=1 00:32:44.390 --rc geninfo_unexecuted_blocks=1 00:32:44.390 
00:32:44.390 ' 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:44.390 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:44.391 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.391 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.960 11:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.960 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.961 11:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:50.961 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:50.961 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:50.961 Found net 
devices under 0000:af:00.0: cvl_0_0 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:50.961 Found net devices under 0000:af:00.1: cvl_0_1 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:32:50.961 00:32:50.961 --- 10.0.0.2 ping statistics --- 00:32:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.961 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:32:50.961 00:32:50.961 --- 10.0.0.1 ping statistics --- 00:32:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.961 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:50.961 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1479582 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1479582 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1479582 ']' 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:50.962 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.962 [2024-11-15 11:50:50.916544] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:50.962 [2024-11-15 11:50:50.917922] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:32:50.962 [2024-11-15 11:50:50.917967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.962 [2024-11-15 11:50:51.020512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.962 [2024-11-15 11:50:51.069517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.962 [2024-11-15 11:50:51.069562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.962 [2024-11-15 11:50:51.069574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.962 [2024-11-15 11:50:51.069583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.962 [2024-11-15 11:50:51.069591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.962 [2024-11-15 11:50:51.071641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.962 [2024-11-15 11:50:51.071742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.962 [2024-11-15 11:50:51.071761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.962 [2024-11-15 11:50:51.071764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.962 [2024-11-15 11:50:51.146779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:50.962 [2024-11-15 11:50:51.146975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:50.962 [2024-11-15 11:50:51.147153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:50.962 [2024-11-15 11:50:51.147562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:50.962 [2024-11-15 11:50:51.147818] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
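For readers following the trace, the target-side setup that nvmftestinit and target/fio.sh perform above and below can be read as one short sequence. The sketch below only consolidates commands that appear verbatim in this log (interface names cvl_0_0/cvl_0_1, addresses 10.0.0.1/10.0.0.2, port 4420, the SPDK workspace path); the $rpc shorthand, the trailing '&' on nvmf_tgt, and the $NVME_HOSTNQN/$NVME_HOSTID placeholders (filled from 'nvme gen-hostnqn' in the trace) are added here for readability and are not part of the recorded output.

# Sketch of the setup traced in this log (not a standalone, supported script).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Move the target-side port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target in interrupt mode inside the namespace, then configure it over /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512        # repeated in the trace to produce Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, exposing the four namespaces as /dev/nvme0n1 .. /dev/nvme0n4 for fio.
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420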
00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:50.962 [2024-11-15 11:50:51.460354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:50.962 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:51.530 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:51.530 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:51.530 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:51.788 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.046 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:52.046 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:52.305 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.564 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:52.564 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.822 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:52.822 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.081 11:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:53.081 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:53.338 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:53.596 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:53.596 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:53.854 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:53.854 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:54.113 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.371 [2024-11-15 11:50:55.164565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.371 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:54.629 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:54.887 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:55.145 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:55.145 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:32:55.145 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:32:55.145 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:32:55.145 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:32:55.145 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:32:57.675 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:57.675 [global] 00:32:57.675 thread=1 00:32:57.675 invalidate=1 00:32:57.675 rw=write 00:32:57.675 time_based=1 00:32:57.675 runtime=1 00:32:57.675 ioengine=libaio 00:32:57.675 direct=1 00:32:57.675 bs=4096 00:32:57.675 iodepth=1 00:32:57.675 norandommap=0 00:32:57.675 numjobs=1 00:32:57.675 00:32:57.675 verify_dump=1 00:32:57.675 verify_backlog=512 00:32:57.675 verify_state_save=0 00:32:57.675 do_verify=1 00:32:57.675 verify=crc32c-intel 00:32:57.675 [job0] 00:32:57.675 filename=/dev/nvme0n1 00:32:57.675 [job1] 00:32:57.675 filename=/dev/nvme0n2 00:32:57.675 [job2] 00:32:57.675 filename=/dev/nvme0n3 00:32:57.675 [job3] 00:32:57.675 filename=/dev/nvme0n4 00:32:57.675 Could not set queue depth (nvme0n1) 00:32:57.675 Could not set queue depth (nvme0n2) 00:32:57.675 Could not set queue depth (nvme0n3) 00:32:57.675 Could not set queue depth (nvme0n4) 00:32:57.675 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.675 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.675 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.675 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.675 fio-3.35 00:32:57.675 Starting 4 threads 00:32:59.069 00:32:59.069 job0: (groupid=0, jobs=1): err= 0: pid=1481099: Fri Nov 15 11:50:59 2024 00:32:59.069 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:32:59.069 slat (nsec): min=10179, max=25760, avg=16459.78, stdev=4106.21 00:32:59.069 clat (usec): min=266, max=41861, avg=39259.90, stdev=8502.37 00:32:59.069 lat (usec): min=278, max=41878, avg=39276.36, stdev=8503.45 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:59.069 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:59.069 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:59.069 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:59.069 | 99.99th=[41681] 00:32:59.069 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:32:59.069 slat (usec): min=11, max=39271, avg=90.79, stdev=1734.95 00:32:59.069 clat (usec): min=145, max=356, avg=173.23, stdev=19.41 00:32:59.069 lat (usec): min=157, max=39599, avg=264.01, stdev=1741.93 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:32:59.069 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 174], 00:32:59.069 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 204], 00:32:59.069 | 
99.00th=[ 223], 99.50th=[ 245], 99.90th=[ 359], 99.95th=[ 359], 00:32:59.069 | 99.99th=[ 359] 00:32:59.069 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.069 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.069 lat (usec) : 250=95.33%, 500=0.56% 00:32:59.069 lat (msec) : 50=4.11% 00:32:59.069 cpu : usr=0.38%, sys=0.87%, ctx=537, majf=0, minf=1 00:32:59.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.069 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.069 job1: (groupid=0, jobs=1): err= 0: pid=1481101: Fri Nov 15 11:50:59 2024 00:32:59.069 read: IOPS=1834, BW=7337KiB/s (7513kB/s)(7344KiB/1001msec) 00:32:59.069 slat (nsec): min=6999, max=25881, avg=8146.11, stdev=992.74 00:32:59.069 clat (usec): min=186, max=508, avg=264.26, stdev=35.88 00:32:59.069 lat (usec): min=194, max=516, avg=272.41, stdev=35.94 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 262], 00:32:59.069 | 30.00th=[ 269], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 273], 00:32:59.069 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 289], 00:32:59.069 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 506], 99.95th=[ 510], 00:32:59.069 | 99.99th=[ 510] 00:32:59.069 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:59.069 slat (usec): min=10, max=40508, avg=51.24, stdev=1253.23 00:32:59.069 clat (usec): min=117, max=453, avg=187.09, stdev=30.88 00:32:59.069 lat (usec): min=144, max=40837, avg=238.33, stdev=1259.04 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:32:59.069 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:32:59.069 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 233], 00:32:59.069 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 355], 99.95th=[ 396], 00:32:59.069 | 99.99th=[ 453] 00:32:59.069 bw ( KiB/s): min= 8192, max= 8192, per=59.49%, avg=8192.00, stdev= 0.00, samples=1 00:32:59.069 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:59.069 lat (usec) : 250=59.32%, 500=40.63%, 750=0.05% 00:32:59.069 cpu : usr=3.80%, sys=5.70%, ctx=3887, majf=0, minf=1 00:32:59.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.069 issued rwts: total=1836,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.069 job2: (groupid=0, jobs=1): err= 0: pid=1481103: Fri Nov 15 11:50:59 2024 00:32:59.069 read: IOPS=23, BW=92.8KiB/s (95.1kB/s)(96.0KiB/1034msec) 00:32:59.069 slat (nsec): min=11093, max=23875, avg=20516.17, stdev=3906.35 00:32:59.069 clat (usec): min=282, max=41195, avg=39274.47, stdev=8305.91 00:32:59.069 lat (usec): min=304, max=41208, avg=39294.99, stdev=8305.59 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:59.069 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:59.069 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:59.069 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:59.069 | 99.99th=[41157] 00:32:59.069 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:32:59.069 slat (nsec): min=10241, max=41172, avg=11664.64, stdev=2155.91 00:32:59.069 clat (usec): min=137, max=280, avg=161.34, stdev=17.91 00:32:59.069 lat (usec): min=148, max=322, avg=173.01, stdev=18.79 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 145], 00:32:59.069 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 165], 00:32:59.069 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 194], 00:32:59.069 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 281], 99.95th=[ 281], 00:32:59.069 | 99.99th=[ 281] 00:32:59.069 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.069 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.069 lat (usec) : 250=95.15%, 500=0.56% 00:32:59.069 lat (msec) : 50=4.29% 00:32:59.069 cpu : usr=0.19%, sys=1.06%, ctx=536, majf=0, minf=2 00:32:59.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.069 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.069 job3: (groupid=0, jobs=1): err= 0: pid=1481105: Fri Nov 15 11:50:59 2024 00:32:59.069 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:32:59.069 slat (nsec): min=10389, max=24687, avg=21091.27, stdev=4200.38 00:32:59.069 clat (usec): min=273, max=44706, avg=38956.41, stdev=8808.29 00:32:59.069 lat (usec): min=296, max=44729, avg=38977.50, stdev=8807.93 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 273], 5.00th=[33817], 10.00th=[40633], 20.00th=[40633], 00:32:59.069 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:59.069 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:59.069 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:32:59.069 | 99.99th=[44827] 00:32:59.069 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:32:59.069 slat (usec): min=10, max=40525, avg=169.08, stdev=2506.90 00:32:59.069 clat (usec): min=147, max=368, avg=172.74, stdev=16.34 00:32:59.069 lat (usec): min=159, max=40894, avg=341.82, stdev=2517.63 00:32:59.069 clat percentiles (usec): 00:32:59.069 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 163], 00:32:59.070 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:32:59.070 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:32:59.070 | 99.00th=[ 212], 99.50th=[ 265], 99.90th=[ 371], 99.95th=[ 371], 00:32:59.070 | 99.99th=[ 371] 00:32:59.070 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.070 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.070 lat (usec) : 250=95.13%, 500=0.94% 00:32:59.070 lat (msec) : 50=3.93% 00:32:59.070 cpu : usr=0.77%, sys=0.58%, ctx=537, majf=0, minf=1 00:32:59.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.070 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.070 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.070 00:32:59.070 Run status group 0 (all jobs): 00:32:59.070 READ: bw=7320KiB/s (7496kB/s), 85.0KiB/s-7337KiB/s (87.1kB/s-7513kB/s), io=7620KiB (7803kB), run=1001-1041msec 00:32:59.070 WRITE: bw=13.4MiB/s (14.1MB/s), 1967KiB/s-8184KiB/s (2015kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1041msec 00:32:59.070 00:32:59.070 Disk stats (read/write): 00:32:59.070 nvme0n1: ios=41/512, merge=0/0, ticks=1525/81, in_queue=1606, util=87.37% 00:32:59.070 nvme0n2: ios=1464/1536, merge=0/0, ticks=1301/282, in_queue=1583, util=91.50% 00:32:59.070 nvme0n3: ios=74/512, merge=0/0, ticks=765/78, in_queue=843, util=92.22% 00:32:59.070 nvme0n4: ios=39/512, merge=0/0, ticks=1522/81, in_queue=1603, util=98.60% 00:32:59.070 11:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:59.070 [global] 00:32:59.070 thread=1 00:32:59.070 invalidate=1 00:32:59.070 rw=randwrite 00:32:59.070 time_based=1 00:32:59.070 runtime=1 00:32:59.070 ioengine=libaio 00:32:59.070 direct=1 00:32:59.070 bs=4096 00:32:59.070 iodepth=1 00:32:59.070 norandommap=0 00:32:59.070 numjobs=1 00:32:59.070 00:32:59.070 verify_dump=1 00:32:59.070 verify_backlog=512 00:32:59.070 verify_state_save=0 00:32:59.070 do_verify=1 00:32:59.070 verify=crc32c-intel 00:32:59.070 [job0] 00:32:59.070 filename=/dev/nvme0n1 00:32:59.070 [job1] 00:32:59.070 filename=/dev/nvme0n2 00:32:59.070 [job2] 00:32:59.070 filename=/dev/nvme0n3 00:32:59.070 [job3] 00:32:59.070 filename=/dev/nvme0n4 00:32:59.070 Could not set queue depth (nvme0n1) 00:32:59.070 Could not set queue depth (nvme0n2) 00:32:59.070 Could not set queue depth (nvme0n3) 00:32:59.070 Could not set queue depth (nvme0n4) 00:32:59.331 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.331 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.331 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.331 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.331 fio-3.35 00:32:59.331 Starting 4 threads 00:33:00.713 00:33:00.713 job0: (groupid=0, jobs=1): err= 0: pid=1481525: Fri Nov 15 11:51:01 2024 00:33:00.713 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:33:00.713 slat (nsec): min=12055, max=29163, avg=22737.59, stdev=3006.38 00:33:00.713 clat (usec): min=40813, max=43815, avg=41103.28, stdev=612.99 00:33:00.713 lat (usec): min=40837, max=43836, avg=41126.02, stdev=612.86 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:00.713 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:00.713 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:00.713 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:33:00.713 | 99.99th=[43779] 00:33:00.713 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:33:00.713 slat (nsec): min=6933, max=47678, avg=11613.04, stdev=2729.46 00:33:00.713 clat (usec): min=155, max=1015, 
avg=200.12, stdev=51.91 00:33:00.713 lat (usec): min=166, max=1025, avg=211.74, stdev=51.53 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:33:00.713 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 00:33:00.713 | 70.00th=[ 202], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 273], 00:33:00.713 | 99.00th=[ 293], 99.50th=[ 330], 99.90th=[ 1012], 99.95th=[ 1012], 00:33:00.713 | 99.99th=[ 1012] 00:33:00.713 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:00.713 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:00.713 lat (usec) : 250=80.71%, 500=14.98% 00:33:00.713 lat (msec) : 2=0.19%, 50=4.12% 00:33:00.713 cpu : usr=0.49%, sys=0.69%, ctx=534, majf=0, minf=1 00:33:00.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.713 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.713 job1: (groupid=0, jobs=1): err= 0: pid=1481534: Fri Nov 15 11:51:01 2024 00:33:00.713 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:33:00.713 slat (nsec): min=12474, max=24716, avg=20948.32, stdev=3867.55 00:33:00.713 clat (usec): min=40640, max=42415, avg=41010.91, stdev=327.72 00:33:00.713 lat (usec): min=40662, max=42430, avg=41031.86, stdev=326.65 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:33:00.713 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:00.713 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:00.713 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:00.713 | 99.99th=[42206] 00:33:00.713 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:33:00.713 slat (nsec): min=7487, max=28110, avg=13275.46, stdev=2027.52 00:33:00.713 clat (usec): min=154, max=526, avg=192.92, stdev=25.36 00:33:00.713 lat (usec): min=165, max=540, avg=206.20, stdev=26.04 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:33:00.713 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:33:00.713 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:33:00.713 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 529], 99.95th=[ 529], 00:33:00.713 | 99.99th=[ 529] 00:33:00.713 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:00.713 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:00.713 lat (usec) : 250=94.57%, 500=1.12%, 750=0.19% 00:33:00.713 lat (msec) : 50=4.12% 00:33:00.713 cpu : usr=0.50%, sys=0.99%, ctx=538, majf=0, minf=1 00:33:00.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.713 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.713 job2: (groupid=0, jobs=1): err= 0: pid=1481545: Fri Nov 15 11:51:01 2024 00:33:00.713 read: IOPS=21, BW=84.7KiB/s 
(86.7kB/s)(88.0KiB/1039msec) 00:33:00.713 slat (nsec): min=10437, max=25026, avg=23460.86, stdev=2950.68 00:33:00.713 clat (usec): min=40621, max=42071, avg=41040.02, stdev=335.79 00:33:00.713 lat (usec): min=40631, max=42095, avg=41063.48, stdev=336.59 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:33:00.713 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:00.713 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:00.713 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:00.713 | 99.99th=[42206] 00:33:00.713 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:33:00.713 slat (nsec): min=9914, max=38127, avg=11802.49, stdev=2162.14 00:33:00.713 clat (usec): min=178, max=410, avg=248.24, stdev=29.53 00:33:00.713 lat (usec): min=188, max=422, avg=260.04, stdev=29.88 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 223], 00:33:00.713 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 262], 00:33:00.713 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:33:00.713 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 412], 99.95th=[ 412], 00:33:00.713 | 99.99th=[ 412] 00:33:00.713 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:33:00.713 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:00.713 lat (usec) : 250=41.39%, 500=54.49% 00:33:00.713 lat (msec) : 50=4.12% 00:33:00.713 cpu : usr=0.39%, sys=0.48%, ctx=536, majf=0, minf=1 00:33:00.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.713 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.713 job3: (groupid=0, jobs=1): err= 0: pid=1481551: Fri Nov 15 11:51:01 2024 00:33:00.713 read: IOPS=524, BW=2098KiB/s (2148kB/s)(2108KiB/1005msec) 00:33:00.713 slat (nsec): min=6527, max=32901, avg=7915.88, stdev=2808.81 00:33:00.713 clat (usec): min=238, max=41070, avg=1474.31, stdev=6761.99 00:33:00.713 lat (usec): min=245, max=41091, avg=1482.23, stdev=6764.35 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 255], 00:33:00.713 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:33:00.713 | 70.00th=[ 306], 80.00th=[ 457], 90.00th=[ 486], 95.00th=[ 502], 00:33:00.713 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:00.713 | 99.99th=[41157] 00:33:00.713 write: IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:33:00.713 slat (nsec): min=9263, max=52685, avg=11776.89, stdev=4336.28 00:33:00.713 clat (usec): min=133, max=701, avg=202.49, stdev=52.36 00:33:00.713 lat (usec): min=150, max=717, avg=214.27, stdev=52.08 00:33:00.713 clat percentiles (usec): 00:33:00.713 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 145], 20.00th=[ 149], 00:33:00.713 | 30.00th=[ 159], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 204], 00:33:00.713 | 70.00th=[ 233], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 289], 00:33:00.713 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 502], 99.95th=[ 701], 00:33:00.713 | 99.99th=[ 701] 00:33:00.713 bw ( KiB/s): min= 8192, max= 8192, 
per=83.12%, avg=8192.00, stdev= 0.00, samples=1 00:33:00.713 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:00.713 lat (usec) : 250=55.51%, 500=42.30%, 750=1.23% 00:33:00.713 lat (msec) : 50=0.97% 00:33:00.713 cpu : usr=0.80%, sys=1.99%, ctx=1551, majf=0, minf=1 00:33:00.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.714 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.714 00:33:00.714 Run status group 0 (all jobs): 00:33:00.714 READ: bw=2283KiB/s (2338kB/s), 84.7KiB/s-2098KiB/s (86.7kB/s-2148kB/s), io=2372KiB (2429kB), run=1005-1039msec 00:33:00.714 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-4076KiB/s (2018kB/s-4173kB/s), io=10.0MiB (10.5MB), run=1005-1039msec 00:33:00.714 00:33:00.714 Disk stats (read/write): 00:33:00.714 nvme0n1: ios=67/512, merge=0/0, ticks=719/93, in_queue=812, util=86.57% 00:33:00.714 nvme0n2: ios=41/512, merge=0/0, ticks=1644/94, in_queue=1738, util=89.53% 00:33:00.714 nvme0n3: ios=81/512, merge=0/0, ticks=916/121, in_queue=1037, util=95.30% 00:33:00.714 nvme0n4: ios=580/1024, merge=0/0, ticks=688/201, in_queue=889, util=95.68% 00:33:00.714 11:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:00.714 [global] 00:33:00.714 thread=1 00:33:00.714 invalidate=1 00:33:00.714 rw=write 00:33:00.714 time_based=1 00:33:00.714 runtime=1 00:33:00.714 ioengine=libaio 00:33:00.714 direct=1 00:33:00.714 bs=4096 00:33:00.714 iodepth=128 00:33:00.714 norandommap=0 00:33:00.714 numjobs=1 00:33:00.714 00:33:00.714 verify_dump=1 00:33:00.714 verify_backlog=512 00:33:00.714 verify_state_save=0 00:33:00.714 do_verify=1 00:33:00.714 verify=crc32c-intel 00:33:00.714 [job0] 00:33:00.714 filename=/dev/nvme0n1 00:33:00.714 [job1] 00:33:00.714 filename=/dev/nvme0n2 00:33:00.714 [job2] 00:33:00.714 filename=/dev/nvme0n3 00:33:00.714 [job3] 00:33:00.714 filename=/dev/nvme0n4 00:33:00.714 Could not set queue depth (nvme0n1) 00:33:00.714 Could not set queue depth (nvme0n2) 00:33:00.714 Could not set queue depth (nvme0n3) 00:33:00.714 Could not set queue depth (nvme0n4) 00:33:00.976 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.977 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.977 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.977 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.977 fio-3.35 00:33:00.977 Starting 4 threads 00:33:02.426 00:33:02.426 job0: (groupid=0, jobs=1): err= 0: pid=1482051: Fri Nov 15 11:51:02 2024 00:33:02.426 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:33:02.426 slat (nsec): min=1656, max=27247k, avg=110385.48, stdev=921386.29 00:33:02.426 clat (usec): min=2099, max=52747, avg=14372.40, stdev=8191.88 00:33:02.426 lat (usec): min=2581, max=52765, avg=14482.79, stdev=8244.63 00:33:02.426 clat percentiles (usec): 00:33:02.426 | 1.00th=[ 4047], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9634], 00:33:02.426 | 
30.00th=[10028], 40.00th=[10814], 50.00th=[11731], 60.00th=[12649], 00:33:02.426 | 70.00th=[13829], 80.00th=[16188], 90.00th=[27395], 95.00th=[32113], 00:33:02.427 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:33:02.427 | 99.99th=[52691] 00:33:02.427 write: IOPS=4665, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1004msec); 0 zone resets 00:33:02.427 slat (usec): min=3, max=22650, avg=93.46, stdev=706.04 00:33:02.427 clat (usec): min=430, max=54228, avg=12972.74, stdev=5282.89 00:33:02.427 lat (usec): min=615, max=54245, avg=13066.20, stdev=5325.65 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[ 3392], 5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[ 9634], 00:33:02.427 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[11994], 60.00th=[13304], 00:33:02.427 | 70.00th=[13566], 80.00th=[14091], 90.00th=[18220], 95.00th=[26608], 00:33:02.427 | 99.00th=[33817], 99.50th=[33817], 99.90th=[49021], 99.95th=[49021], 00:33:02.427 | 99.99th=[54264] 00:33:02.427 bw ( KiB/s): min=16384, max=20480, per=28.07%, avg=18432.00, stdev=2896.31, samples=2 00:33:02.427 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:33:02.427 lat (usec) : 500=0.01%, 750=0.05% 00:33:02.427 lat (msec) : 2=0.01%, 4=1.17%, 10=30.89%, 20=56.80%, 50=11.04% 00:33:02.427 lat (msec) : 100=0.02% 00:33:02.427 cpu : usr=3.69%, sys=4.89%, ctx=382, majf=0, minf=1 00:33:02.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:33:02.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.427 issued rwts: total=4608,4684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.427 job1: (groupid=0, jobs=1): err= 0: pid=1482057: Fri Nov 15 11:51:02 2024 00:33:02.427 read: IOPS=5180, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec) 00:33:02.427 slat (usec): min=2, max=12409, avg=95.70, stdev=684.17 00:33:02.427 clat (usec): min=1263, max=37606, avg=12365.79, stdev=4927.79 00:33:02.427 lat (usec): min=5735, max=37625, avg=12461.49, stdev=4974.36 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[ 6521], 5.00th=[ 7242], 10.00th=[ 8291], 20.00th=[ 8979], 00:33:02.427 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11469], 00:33:02.427 | 70.00th=[13304], 80.00th=[15008], 90.00th=[19268], 95.00th=[23987], 00:33:02.427 | 99.00th=[30802], 99.50th=[30802], 99.90th=[32900], 99.95th=[32900], 00:33:02.427 | 99.99th=[37487] 00:33:02.427 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:33:02.427 slat (usec): min=3, max=12181, avg=81.88, stdev=581.65 00:33:02.427 clat (usec): min=398, max=58946, avg=11177.14, stdev=4498.43 00:33:02.427 lat (usec): min=404, max=58956, avg=11259.02, stdev=4534.36 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 7701], 20.00th=[ 8717], 00:33:02.427 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10945], 00:33:02.427 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14877], 95.00th=[17695], 00:33:02.427 | 99.00th=[24249], 99.50th=[37487], 99.90th=[58459], 99.95th=[58459], 00:33:02.427 | 99.99th=[58983] 00:33:02.427 bw ( KiB/s): min=20064, max=24576, per=33.99%, avg=22320.00, stdev=3190.47, samples=2 00:33:02.427 iops : min= 5016, max= 6144, avg=5580.00, stdev=797.62, samples=2 00:33:02.427 lat (usec) : 500=0.04% 00:33:02.427 lat (msec) : 2=0.08%, 4=0.13%, 10=47.73%, 20=46.31%, 50=5.58% 00:33:02.427 
lat (msec) : 100=0.14% 00:33:02.427 cpu : usr=5.39%, sys=6.69%, ctx=393, majf=0, minf=2 00:33:02.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:02.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.427 issued rwts: total=5196,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.427 job2: (groupid=0, jobs=1): err= 0: pid=1482066: Fri Nov 15 11:51:02 2024 00:33:02.427 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:33:02.427 slat (usec): min=2, max=23243, avg=215.09, stdev=1437.06 00:33:02.427 clat (usec): min=12085, max=90029, avg=25336.00, stdev=13201.21 00:33:02.427 lat (usec): min=12204, max=90038, avg=25551.09, stdev=13305.95 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[13173], 5.00th=[15008], 10.00th=[16188], 20.00th=[16712], 00:33:02.427 | 30.00th=[17433], 40.00th=[18482], 50.00th=[19530], 60.00th=[22938], 00:33:02.427 | 70.00th=[26346], 80.00th=[31851], 90.00th=[42730], 95.00th=[51119], 00:33:02.427 | 99.00th=[76022], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:33:02.427 | 99.99th=[89654] 00:33:02.427 write: IOPS=2091, BW=8366KiB/s (8567kB/s)(8416KiB/1006msec); 0 zone resets 00:33:02.427 slat (usec): min=2, max=20562, avg=259.52, stdev=1506.98 00:33:02.427 clat (usec): min=1958, max=90050, avg=35610.25, stdev=17499.00 00:33:02.427 lat (usec): min=8844, max=90060, avg=35869.76, stdev=17522.55 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[11731], 5.00th=[15926], 10.00th=[16188], 20.00th=[19268], 00:33:02.427 | 30.00th=[20841], 40.00th=[25297], 50.00th=[30540], 60.00th=[40633], 00:33:02.427 | 70.00th=[45876], 80.00th=[51119], 90.00th=[57410], 95.00th=[65274], 00:33:02.427 | 99.00th=[79168], 99.50th=[81265], 99.90th=[89654], 99.95th=[89654], 00:33:02.427 | 99.99th=[89654] 00:33:02.427 bw ( KiB/s): min= 8136, max= 8248, per=12.47%, avg=8192.00, stdev=79.20, samples=2 00:33:02.427 iops : min= 2034, max= 2062, avg=2048.00, stdev=19.80, samples=2 00:33:02.427 lat (msec) : 2=0.02%, 10=0.39%, 20=37.76%, 50=47.37%, 100=14.45% 00:33:02.427 cpu : usr=2.39%, sys=2.59%, ctx=215, majf=0, minf=1 00:33:02.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:02.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.427 issued rwts: total=2048,2104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.427 job3: (groupid=0, jobs=1): err= 0: pid=1482067: Fri Nov 15 11:51:02 2024 00:33:02.427 read: IOPS=4033, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1006msec) 00:33:02.427 slat (usec): min=2, max=17912, avg=113.36, stdev=855.94 00:33:02.427 clat (usec): min=2928, max=40903, avg=14037.74, stdev=5434.78 00:33:02.427 lat (usec): min=4586, max=40911, avg=14151.10, stdev=5496.29 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[ 5276], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[10290], 00:33:02.427 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12649], 60.00th=[13829], 00:33:02.427 | 70.00th=[15139], 80.00th=[16909], 90.00th=[20055], 95.00th=[25822], 00:33:02.427 | 99.00th=[32900], 99.50th=[35914], 99.90th=[40109], 99.95th=[41157], 00:33:02.427 | 99.99th=[41157] 00:33:02.427 write: IOPS=4071, BW=15.9MiB/s 
(16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:33:02.427 slat (usec): min=3, max=20393, avg=111.77, stdev=757.70 00:33:02.427 clat (usec): min=1364, max=40917, avg=16873.57, stdev=7907.50 00:33:02.427 lat (usec): min=1373, max=40926, avg=16985.35, stdev=7957.91 00:33:02.427 clat percentiles (usec): 00:33:02.427 | 1.00th=[ 5473], 5.00th=[ 6783], 10.00th=[ 8029], 20.00th=[10159], 00:33:02.427 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13960], 60.00th=[19006], 00:33:02.427 | 70.00th=[21627], 80.00th=[24249], 90.00th=[27919], 95.00th=[30802], 00:33:02.427 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[41157], 00:33:02.427 | 99.99th=[41157] 00:33:02.427 bw ( KiB/s): min=16360, max=16408, per=24.95%, avg=16384.00, stdev=33.94, samples=2 00:33:02.427 iops : min= 4090, max= 4102, avg=4096.00, stdev= 8.49, samples=2 00:33:02.427 lat (msec) : 2=0.10%, 4=0.01%, 10=18.08%, 20=57.76%, 50=24.05% 00:33:02.427 cpu : usr=3.78%, sys=5.67%, ctx=338, majf=0, minf=1 00:33:02.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:02.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.427 issued rwts: total=4058,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.427 00:33:02.427 Run status group 0 (all jobs): 00:33:02.427 READ: bw=61.8MiB/s (64.8MB/s), 8143KiB/s-20.2MiB/s (8339kB/s-21.2MB/s), io=62.1MiB (65.2MB), run=1003-1006msec 00:33:02.427 WRITE: bw=64.1MiB/s (67.2MB/s), 8366KiB/s-21.9MiB/s (8567kB/s-23.0MB/s), io=64.5MiB (67.6MB), run=1003-1006msec 00:33:02.427 00:33:02.427 Disk stats (read/write): 00:33:02.427 nvme0n1: ios=3634/3775, merge=0/0, ticks=29403/24371, in_queue=53774, util=86.87% 00:33:02.427 nvme0n2: ios=4658/4839, merge=0/0, ticks=34043/30661, in_queue=64704, util=90.86% 00:33:02.427 nvme0n3: ios=1593/1847, merge=0/0, ticks=14246/16767, in_queue=31013, util=94.17% 00:33:02.427 nvme0n4: ios=3498/3584, merge=0/0, ticks=47679/55118, in_queue=102797, util=93.91% 00:33:02.427 11:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:02.427 [global] 00:33:02.427 thread=1 00:33:02.427 invalidate=1 00:33:02.427 rw=randwrite 00:33:02.427 time_based=1 00:33:02.427 runtime=1 00:33:02.427 ioengine=libaio 00:33:02.427 direct=1 00:33:02.427 bs=4096 00:33:02.427 iodepth=128 00:33:02.427 norandommap=0 00:33:02.427 numjobs=1 00:33:02.427 00:33:02.427 verify_dump=1 00:33:02.427 verify_backlog=512 00:33:02.427 verify_state_save=0 00:33:02.427 do_verify=1 00:33:02.427 verify=crc32c-intel 00:33:02.427 [job0] 00:33:02.427 filename=/dev/nvme0n1 00:33:02.427 [job1] 00:33:02.427 filename=/dev/nvme0n2 00:33:02.427 [job2] 00:33:02.427 filename=/dev/nvme0n3 00:33:02.427 [job3] 00:33:02.427 filename=/dev/nvme0n4 00:33:02.427 Could not set queue depth (nvme0n1) 00:33:02.427 Could not set queue depth (nvme0n2) 00:33:02.427 Could not set queue depth (nvme0n3) 00:33:02.427 Could not set queue depth (nvme0n4) 00:33:02.427 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.427 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.427 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:33:02.427 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.427 fio-3.35 00:33:02.427 Starting 4 threads 00:33:03.848 00:33:03.848 job0: (groupid=0, jobs=1): err= 0: pid=1482504: Fri Nov 15 11:51:04 2024 00:33:03.848 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:33:03.848 slat (nsec): min=918, max=12331k, avg=71994.83, stdev=491855.29 00:33:03.848 clat (usec): min=2064, max=27496, avg=9721.32, stdev=3919.96 00:33:03.848 lat (usec): min=2071, max=34714, avg=9793.31, stdev=3951.31 00:33:03.848 clat percentiles (usec): 00:33:03.848 | 1.00th=[ 3228], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6849], 00:33:03.848 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9241], 00:33:03.848 | 70.00th=[11076], 80.00th=[12911], 90.00th=[15139], 95.00th=[16909], 00:33:03.848 | 99.00th=[24511], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:33:03.848 | 99.99th=[27395] 00:33:03.848 write: IOPS=6539, BW=25.5MiB/s (26.8MB/s)(25.8MiB/1008msec); 0 zone resets 00:33:03.848 slat (nsec): min=1544, max=15886k, avg=76284.03, stdev=548878.37 00:33:03.848 clat (usec): min=273, max=41201, avg=10311.92, stdev=5736.13 00:33:03.848 lat (usec): min=277, max=41233, avg=10388.21, stdev=5788.82 00:33:03.848 clat percentiles (usec): 00:33:03.848 | 1.00th=[ 1893], 5.00th=[ 2638], 10.00th=[ 4883], 20.00th=[ 7177], 00:33:03.848 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9503], 00:33:03.848 | 70.00th=[10945], 80.00th=[13698], 90.00th=[19530], 95.00th=[23725], 00:33:03.848 | 99.00th=[28181], 99.50th=[28443], 99.90th=[30278], 99.95th=[30802], 00:33:03.848 | 99.99th=[41157] 00:33:03.848 bw ( KiB/s): min=20112, max=31608, per=41.35%, avg=25860.00, stdev=8128.90, samples=2 00:33:03.848 iops : min= 5028, max= 7902, avg=6465.00, stdev=2032.22, samples=2 00:33:03.848 lat (usec) : 500=0.02%, 1000=0.04% 00:33:03.848 lat (msec) : 2=0.97%, 4=3.27%, 10=60.34%, 20=29.82%, 50=5.54% 00:33:03.848 cpu : usr=5.66%, sys=5.56%, ctx=514, majf=0, minf=1 00:33:03.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:03.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.848 issued rwts: total=6144,6592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.848 job1: (groupid=0, jobs=1): err= 0: pid=1482505: Fri Nov 15 11:51:04 2024 00:33:03.848 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:33:03.848 slat (nsec): min=1613, max=33651k, avg=124102.08, stdev=1098067.81 00:33:03.848 clat (msec): min=4, max=119, avg=16.50, stdev=14.22 00:33:03.848 lat (msec): min=4, max=125, avg=16.62, stdev=14.33 00:33:03.848 clat percentiles (msec): 00:33:03.848 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:33:03.848 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:33:03.848 | 70.00th=[ 14], 80.00th=[ 28], 90.00th=[ 36], 95.00th=[ 43], 00:33:03.848 | 99.00th=[ 80], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 120], 00:33:03.848 | 99.99th=[ 120] 00:33:03.848 write: IOPS=3151, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1015msec); 0 zone resets 00:33:03.848 slat (usec): min=2, max=18427, avg=191.16, stdev=1134.19 00:33:03.848 clat (msec): min=4, max=118, avg=23.83, stdev=27.78 00:33:03.848 lat (msec): min=5, max=118, avg=24.02, stdev=27.95 00:33:03.848 clat percentiles (msec): 
00:33:03.848 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:33:03.848 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 12], 00:33:03.848 | 70.00th=[ 15], 80.00th=[ 28], 90.00th=[ 77], 95.00th=[ 89], 00:33:03.848 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 120], 00:33:03.848 | 99.99th=[ 120] 00:33:03.848 bw ( KiB/s): min= 4096, max=20536, per=19.69%, avg=12316.00, stdev=11624.84, samples=2 00:33:03.848 iops : min= 1024, max= 5134, avg=3079.00, stdev=2906.21, samples=2 00:33:03.848 lat (msec) : 10=41.86%, 20=34.68%, 50=12.85%, 100=9.36%, 250=1.24% 00:33:03.848 cpu : usr=2.17%, sys=2.96%, ctx=294, majf=0, minf=1 00:33:03.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:33:03.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.848 issued rwts: total=3072,3199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.848 job2: (groupid=0, jobs=1): err= 0: pid=1482511: Fri Nov 15 11:51:04 2024 00:33:03.848 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:33:03.849 slat (nsec): min=1778, max=18303k, avg=161235.07, stdev=1032018.53 00:33:03.849 clat (usec): min=7255, max=49183, avg=21572.15, stdev=7934.71 00:33:03.849 lat (usec): min=7260, max=49670, avg=21733.39, stdev=8011.99 00:33:03.849 clat percentiles (usec): 00:33:03.849 | 1.00th=[ 8094], 5.00th=[11076], 10.00th=[11994], 20.00th=[15139], 00:33:03.849 | 30.00th=[16712], 40.00th=[18744], 50.00th=[19268], 60.00th=[22152], 00:33:03.849 | 70.00th=[25297], 80.00th=[27132], 90.00th=[33817], 95.00th=[37487], 00:33:03.849 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46400], 99.95th=[48497], 00:33:03.849 | 99.99th=[49021] 00:33:03.849 write: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1013msec); 0 zone resets 00:33:03.849 slat (usec): min=2, max=31603, avg=192.04, stdev=1399.07 00:33:03.849 clat (usec): min=1672, max=79917, avg=23938.81, stdev=12169.00 00:33:03.849 lat (usec): min=6921, max=81578, avg=24130.85, stdev=12292.42 00:33:03.849 clat percentiles (usec): 00:33:03.849 | 1.00th=[10159], 5.00th=[10683], 10.00th=[11338], 20.00th=[15008], 00:33:03.849 | 30.00th=[15664], 40.00th=[18482], 50.00th=[20055], 60.00th=[21627], 00:33:03.849 | 70.00th=[25822], 80.00th=[34866], 90.00th=[42206], 95.00th=[45876], 00:33:03.849 | 99.00th=[61080], 99.50th=[61604], 99.90th=[62129], 99.95th=[72877], 00:33:03.849 | 99.99th=[80217] 00:33:03.849 bw ( KiB/s): min=10744, max=12288, per=18.41%, avg=11516.00, stdev=1091.77, samples=2 00:33:03.849 iops : min= 2686, max= 3072, avg=2879.00, stdev=272.94, samples=2 00:33:03.849 lat (msec) : 2=0.02%, 10=1.20%, 20=49.07%, 50=47.37%, 100=2.34% 00:33:03.849 cpu : usr=1.88%, sys=4.25%, ctx=193, majf=0, minf=1 00:33:03.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:03.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.849 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.849 job3: (groupid=0, jobs=1): err= 0: pid=1482514: Fri Nov 15 11:51:04 2024 00:33:03.849 read: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1011msec) 00:33:03.849 slat (nsec): min=1953, max=18376k, avg=133480.34, stdev=1068639.04 00:33:03.849 clat (usec): min=922, 
max=95410, avg=18841.88, stdev=14315.84 00:33:03.849 lat (usec): min=933, max=95431, avg=18975.36, stdev=14407.55 00:33:03.849 clat percentiles (usec): 00:33:03.849 | 1.00th=[ 2442], 5.00th=[ 3425], 10.00th=[10421], 20.00th=[11863], 00:33:03.849 | 30.00th=[14222], 40.00th=[14877], 50.00th=[15401], 60.00th=[17433], 00:33:03.849 | 70.00th=[20317], 80.00th=[21103], 90.00th=[26346], 95.00th=[38011], 00:33:03.849 | 99.00th=[93848], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:33:03.849 | 99.99th=[94897] 00:33:03.849 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:33:03.849 slat (usec): min=3, max=14805, avg=183.44, stdev=1028.05 00:33:03.849 clat (usec): min=549, max=133665, avg=25831.38, stdev=22970.66 00:33:03.849 lat (usec): min=662, max=133674, avg=26014.82, stdev=23103.66 00:33:03.849 clat percentiles (msec): 00:33:03.849 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 11], 00:33:03.849 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 18], 00:33:03.849 | 70.00th=[ 29], 80.00th=[ 36], 90.00th=[ 58], 95.00th=[ 78], 00:33:03.849 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 125], 99.95th=[ 125], 00:33:03.849 | 99.99th=[ 134] 00:33:03.849 bw ( KiB/s): min= 8848, max=14960, per=19.03%, avg=11904.00, stdev=4321.84, samples=2 00:33:03.849 iops : min= 2212, max= 3740, avg=2976.00, stdev=1080.46, samples=2 00:33:03.849 lat (usec) : 750=0.02%, 1000=0.18% 00:33:03.849 lat (msec) : 2=0.64%, 4=4.27%, 10=7.70%, 20=53.49%, 50=24.28% 00:33:03.849 lat (msec) : 100=8.76%, 250=0.67% 00:33:03.849 cpu : usr=1.98%, sys=4.46%, ctx=258, majf=0, minf=2 00:33:03.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:03.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.849 issued rwts: total=2591,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.849 00:33:03.849 Run status group 0 (all jobs): 00:33:03.849 READ: bw=55.3MiB/s (58.0MB/s), 9.87MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=56.1MiB (58.8MB), run=1008-1015msec 00:33:03.849 WRITE: bw=61.1MiB/s (64.0MB/s), 11.6MiB/s-25.5MiB/s (12.2MB/s-26.8MB/s), io=62.0MiB (65.0MB), run=1008-1015msec 00:33:03.849 00:33:03.849 Disk stats (read/write): 00:33:03.849 nvme0n1: ios=4750/5120, merge=0/0, ticks=22556/28236, in_queue=50792, util=86.97% 00:33:03.849 nvme0n2: ios=2911/3072, merge=0/0, ticks=18454/28403, in_queue=46857, util=97.66% 00:33:03.849 nvme0n3: ios=2576/2565, merge=0/0, ticks=23755/20497, in_queue=44252, util=90.00% 00:33:03.849 nvme0n4: ios=2269/2522, merge=0/0, ticks=33978/66056, in_queue=100034, util=98.11% 00:33:03.849 11:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:03.849 11:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1482764 00:33:03.849 11:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:03.849 11:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:03.849 [global] 00:33:03.849 thread=1 00:33:03.849 invalidate=1 00:33:03.849 rw=read 00:33:03.849 time_based=1 00:33:03.849 runtime=10 00:33:03.849 ioengine=libaio 00:33:03.849 direct=1 00:33:03.849 bs=4096 00:33:03.849 iodepth=1 00:33:03.849 
norandommap=1 00:33:03.849 numjobs=1 00:33:03.849 00:33:03.849 [job0] 00:33:03.849 filename=/dev/nvme0n1 00:33:03.849 [job1] 00:33:03.849 filename=/dev/nvme0n2 00:33:03.849 [job2] 00:33:03.849 filename=/dev/nvme0n3 00:33:03.849 [job3] 00:33:03.849 filename=/dev/nvme0n4 00:33:03.849 Could not set queue depth (nvme0n1) 00:33:03.849 Could not set queue depth (nvme0n2) 00:33:03.849 Could not set queue depth (nvme0n3) 00:33:03.849 Could not set queue depth (nvme0n4) 00:33:04.107 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.107 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.107 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.107 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.107 fio-3.35 00:33:04.107 Starting 4 threads 00:33:07.394 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:07.394 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:07.394 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:33:07.394 fio: pid=1482981, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.394 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.394 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:07.394 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=311296, buflen=4096 00:33:07.394 fio: pid=1482973, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.653 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.653 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:07.653 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54312960, buflen=4096 00:33:07.653 fio: pid=1482941, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.912 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56131584, buflen=4096 00:33:07.912 fio: pid=1482951, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.912 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.912 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:07.912 00:33:07.912 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1482941: Fri Nov 15 11:51:08 2024 00:33:07.912 read: IOPS=4047, BW=15.8MiB/s 
(16.6MB/s)(51.8MiB/3276msec) 00:33:07.912 slat (usec): min=4, max=15666, avg= 8.88, stdev=159.63 00:33:07.912 clat (usec): min=189, max=1280, avg=234.77, stdev=36.84 00:33:07.912 lat (usec): min=195, max=15975, avg=243.65, stdev=164.80 00:33:07.912 clat percentiles (usec): 00:33:07.912 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 217], 00:33:07.912 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 225], 00:33:07.912 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 306], 95.00th=[ 330], 00:33:07.912 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 494], 99.95th=[ 510], 00:33:07.912 | 99.99th=[ 644] 00:33:07.912 bw ( KiB/s): min=13592, max=17520, per=53.90%, avg=16462.67, stdev=1627.69, samples=6 00:33:07.912 iops : min= 3398, max= 4380, avg=4115.67, stdev=406.92, samples=6 00:33:07.912 lat (usec) : 250=86.99%, 500=12.93%, 750=0.07% 00:33:07.912 lat (msec) : 2=0.01% 00:33:07.912 cpu : usr=1.10%, sys=3.39%, ctx=13263, majf=0, minf=2 00:33:07.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.912 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.912 issued rwts: total=13261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.912 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1482951: Fri Nov 15 11:51:08 2024 00:33:07.912 read: IOPS=3860, BW=15.1MiB/s (15.8MB/s)(53.5MiB/3550msec) 00:33:07.912 slat (usec): min=6, max=15590, avg=12.32, stdev=257.49 00:33:07.912 clat (usec): min=179, max=1708, avg=244.10, stdev=24.86 00:33:07.912 lat (usec): min=186, max=15940, avg=256.41, stdev=261.11 00:33:07.912 clat percentiles (usec): 00:33:07.912 | 1.00th=[ 192], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 235], 00:33:07.912 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:33:07.912 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 258], 95.00th=[ 262], 00:33:07.912 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 441], 99.95th=[ 474], 00:33:07.912 | 99.99th=[ 1467] 00:33:07.912 bw ( KiB/s): min=15472, max=15567, per=50.80%, avg=15517.17, stdev=33.52, samples=6 00:33:07.912 iops : min= 3868, max= 3891, avg=3879.17, stdev= 8.16, samples=6 00:33:07.912 lat (usec) : 250=59.55%, 500=40.41%, 750=0.01% 00:33:07.912 lat (msec) : 2=0.01% 00:33:07.912 cpu : usr=0.76%, sys=3.55%, ctx=13712, majf=0, minf=2 00:33:07.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.913 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.913 issued rwts: total=13705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.913 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1482973: Fri Nov 15 11:51:08 2024 00:33:07.913 read: IOPS=25, BW=101KiB/s (103kB/s)(304KiB/3019msec) 00:33:07.913 slat (nsec): min=10308, max=36220, avg=23571.56, stdev=4190.56 00:33:07.913 clat (usec): min=295, max=43028, avg=39415.13, stdev=7947.02 00:33:07.913 lat (usec): min=317, max=43044, avg=39438.69, stdev=7946.18 00:33:07.913 clat percentiles (usec): 00:33:07.913 | 1.00th=[ 297], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:07.913 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:33:07.913 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:07.913 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:33:07.913 | 99.99th=[43254] 00:33:07.913 bw ( KiB/s): min= 96, max= 104, per=0.32%, avg=97.60, stdev= 3.58, samples=5 00:33:07.913 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:33:07.913 lat (usec) : 500=1.30%, 750=2.60% 00:33:07.913 lat (msec) : 50=94.81% 00:33:07.913 cpu : usr=0.00%, sys=0.13%, ctx=77, majf=0, minf=2 00:33:07.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.913 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.913 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.913 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1482981: Fri Nov 15 11:51:08 2024 00:33:07.913 read: IOPS=25, BW=99.6KiB/s (102kB/s)(272KiB/2732msec) 00:33:07.913 slat (nsec): min=10153, max=37411, avg=14899.86, stdev=4160.71 00:33:07.913 clat (usec): min=264, max=42007, avg=39833.38, stdev=6929.65 00:33:07.913 lat (usec): min=281, max=42019, avg=39848.27, stdev=6927.45 00:33:07.913 clat percentiles (usec): 00:33:07.913 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:07.913 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:07.913 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:07.913 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:07.913 | 99.99th=[42206] 00:33:07.913 bw ( KiB/s): min= 96, max= 104, per=0.32%, avg=99.20, stdev= 4.38, samples=5 00:33:07.913 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:33:07.913 lat (usec) : 500=2.90% 00:33:07.913 lat (msec) : 50=95.65% 00:33:07.913 cpu : usr=0.07%, sys=0.00%, ctx=69, majf=0, minf=1 00:33:07.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.913 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.913 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.913 00:33:07.913 Run status group 0 (all jobs): 00:33:07.913 READ: bw=29.8MiB/s (31.3MB/s), 99.6KiB/s-15.8MiB/s (102kB/s-16.6MB/s), io=106MiB (111MB), run=2732-3550msec 00:33:07.913 00:33:07.913 Disk stats (read/write): 00:33:07.913 nvme0n1: ios=12632/0, merge=0/0, ticks=2896/0, in_queue=2896, util=94.17% 00:33:07.913 nvme0n2: ios=12931/0, merge=0/0, ticks=3113/0, in_queue=3113, util=94.53% 00:33:07.913 nvme0n3: ios=70/0, merge=0/0, ticks=2792/0, in_queue=2792, util=96.32% 00:33:07.913 nvme0n4: ios=64/0, merge=0/0, ticks=2545/0, in_queue=2545, util=96.45% 00:33:08.172 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.172 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:08.431 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.431 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:08.691 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.691 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:08.950 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.950 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:09.209 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:09.209 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1482764 00:33:09.209 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:09.209 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:09.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:09.469 nvmf hotplug test: fio failed as expected 00:33:09.469 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:09.728 11:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:09.728 rmmod nvme_tcp 00:33:09.728 rmmod nvme_fabrics 00:33:09.728 rmmod nvme_keyring 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1479582 ']' 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1479582 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1479582 ']' 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1479582 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1479582 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1479582' 00:33:09.728 killing process with pid 1479582 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1479582 00:33:09.728 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1479582 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.986 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.987 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.987 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.987 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.987 11:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.520 00:33:12.520 real 0m28.013s 00:33:12.520 user 2m1.788s 00:33:12.520 sys 0m11.678s 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.520 ************************************ 00:33:12.520 END TEST nvmf_fio_target 00:33:12.520 ************************************ 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.520 ************************************ 00:33:12.520 START TEST nvmf_bdevio 00:33:12.520 ************************************ 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:12.520 * Looking for test storage... 
00:33:12.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:33:12.520 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:12.520 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:12.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.521 --rc genhtml_branch_coverage=1 00:33:12.521 --rc genhtml_function_coverage=1 00:33:12.521 --rc genhtml_legend=1 00:33:12.521 --rc geninfo_all_blocks=1 00:33:12.521 --rc geninfo_unexecuted_blocks=1 00:33:12.521 00:33:12.521 ' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:12.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.521 --rc genhtml_branch_coverage=1 00:33:12.521 --rc genhtml_function_coverage=1 00:33:12.521 --rc genhtml_legend=1 00:33:12.521 --rc geninfo_all_blocks=1 00:33:12.521 --rc geninfo_unexecuted_blocks=1 00:33:12.521 00:33:12.521 ' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:12.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.521 --rc genhtml_branch_coverage=1 00:33:12.521 --rc genhtml_function_coverage=1 00:33:12.521 --rc genhtml_legend=1 00:33:12.521 --rc geninfo_all_blocks=1 00:33:12.521 --rc geninfo_unexecuted_blocks=1 00:33:12.521 00:33:12.521 ' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.521 --rc genhtml_branch_coverage=1 00:33:12.521 --rc genhtml_function_coverage=1 00:33:12.521 --rc genhtml_legend=1 00:33:12.521 --rc geninfo_all_blocks=1 00:33:12.521 --rc geninfo_unexecuted_blocks=1 00:33:12.521 00:33:12.521 ' 00:33:12.521 11:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.521 11:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.521 11:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:17.797 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:17.797 11:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:17.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:17.797 Found net devices under 0000:af:00.0: cvl_0_0 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:17.797 Found net devices under 0000:af:00.1: cvl_0_1 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:17.797 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:33:17.798 00:33:17.798 --- 10.0.0.2 ping statistics --- 00:33:17.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.798 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:17.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:33:17.798 00:33:17.798 --- 10.0.0.1 ping statistics --- 00:33:17.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.798 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:17.798 11:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1487854 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1487854 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1487854 ']' 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:17.798 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:17.798 [2024-11-15 11:51:18.591963] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:17.798 [2024-11-15 11:51:18.593314] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:33:17.798 [2024-11-15 11:51:18.593359] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.057 [2024-11-15 11:51:18.665197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:18.057 [2024-11-15 11:51:18.703954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.057 [2024-11-15 11:51:18.703988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.057 [2024-11-15 11:51:18.703994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.057 [2024-11-15 11:51:18.703999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.057 [2024-11-15 11:51:18.704003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.057 [2024-11-15 11:51:18.705660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:18.057 [2024-11-15 11:51:18.705775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:18.057 [2024-11-15 11:51:18.705907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:18.057 [2024-11-15 11:51:18.705909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:18.057 [2024-11-15 11:51:18.771732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:18.057 [2024-11-15 11:51:18.772372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:18.057 [2024-11-15 11:51:18.772768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:18.057 [2024-11-15 11:51:18.772935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:18.057 [2024-11-15 11:51:18.773052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:18.057 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:18.057 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:33:18.057 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:18.057 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.057 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.057 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.058 [2024-11-15 11:51:18.850560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.058 Malloc0 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.058 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.058 11:51:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.318 [2024-11-15 11:51:18.918485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.318 { 00:33:18.318 "params": { 00:33:18.318 "name": "Nvme$subsystem", 00:33:18.318 "trtype": "$TEST_TRANSPORT", 00:33:18.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.318 "adrfam": "ipv4", 00:33:18.318 "trsvcid": "$NVMF_PORT", 00:33:18.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.318 "hdgst": ${hdgst:-false}, 00:33:18.318 "ddgst": ${ddgst:-false} 00:33:18.318 }, 00:33:18.318 "method": "bdev_nvme_attach_controller" 00:33:18.318 } 00:33:18.318 EOF 00:33:18.318 )") 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:18.318 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.318 "params": { 00:33:18.318 "name": "Nvme1", 00:33:18.318 "trtype": "tcp", 00:33:18.318 "traddr": "10.0.0.2", 00:33:18.318 "adrfam": "ipv4", 00:33:18.318 "trsvcid": "4420", 00:33:18.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.318 "hdgst": false, 00:33:18.318 "ddgst": false 00:33:18.318 }, 00:33:18.318 "method": "bdev_nvme_attach_controller" 00:33:18.318 }' 00:33:18.318 [2024-11-15 11:51:18.974797] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:33:18.318 [2024-11-15 11:51:18.974859] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488040 ] 00:33:18.318 [2024-11-15 11:51:19.071213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:18.318 [2024-11-15 11:51:19.122774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.318 [2024-11-15 11:51:19.122875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:18.318 [2024-11-15 11:51:19.122876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.577 I/O targets: 00:33:18.577 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:18.577 00:33:18.577 00:33:18.578 CUnit - A unit testing framework for C - Version 2.1-3 00:33:18.578 http://cunit.sourceforge.net/ 00:33:18.578 00:33:18.578 00:33:18.578 Suite: bdevio tests on: Nvme1n1 00:33:18.578 Test: blockdev write read block ...passed 00:33:18.578 Test: blockdev write zeroes read block ...passed 00:33:18.578 Test: blockdev write zeroes read no split ...passed 00:33:18.836 Test: blockdev write zeroes read split ...passed 00:33:18.836 Test: blockdev write zeroes read split partial ...passed 00:33:18.836 Test: blockdev reset ...[2024-11-15 11:51:19.511152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:18.836 [2024-11-15 11:51:19.511227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7a7c0 (9): Bad file descriptor 00:33:18.836 [2024-11-15 11:51:19.644368] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:33:18.836 passed 00:33:18.836 Test: blockdev write read 8 blocks ...passed 00:33:18.836 Test: blockdev write read size > 128k ...passed 00:33:18.836 Test: blockdev write read invalid size ...passed 00:33:19.096 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:19.096 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:19.096 Test: blockdev write read max offset ...passed 00:33:19.096 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:19.096 Test: blockdev writev readv 8 blocks ...passed 00:33:19.096 Test: blockdev writev readv 30 x 1block ...passed 00:33:19.096 Test: blockdev writev readv block ...passed 00:33:19.096 Test: blockdev writev readv size > 128k ...passed 00:33:19.096 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:19.096 Test: blockdev comparev and writev ...[2024-11-15 11:51:19.855045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.855071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.855084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.855091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.855385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.855394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.855404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.855411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.855705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.855714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.855725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.855731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.856022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.856037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.856048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.096 [2024-11-15 11:51:19.856055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:19.096 passed 00:33:19.096 Test: blockdev nvme passthru rw ...passed 00:33:19.096 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:51:19.937746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.096 [2024-11-15 11:51:19.937759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.937871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.096 [2024-11-15 11:51:19.937879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.937983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.096 [2024-11-15 11:51:19.937992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:19.096 [2024-11-15 11:51:19.938096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.096 [2024-11-15 11:51:19.938104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:19.096 passed 00:33:19.355 Test: blockdev nvme admin passthru ...passed 00:33:19.355 Test: blockdev copy ...passed 00:33:19.355 00:33:19.355 Run Summary: Type Total Ran Passed Failed Inactive 00:33:19.355 suites 1 1 n/a 0 0 00:33:19.355 tests 23 23 23 0 0 00:33:19.355 asserts 152 152 152 0 n/a 00:33:19.355 00:33:19.355 Elapsed time = 1.341 seconds 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:19.355 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:19.355 rmmod nvme_tcp 00:33:19.355 rmmod nvme_fabrics 00:33:19.355 rmmod nvme_keyring 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
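[editor's note] For readers following the bdevio run that just completed above: the target side of that test is assembled entirely over the RPC socket before bdevio connects. The rpc_cmd calls echoed earlier (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are autotest-harness wrappers; a rough stand-alone equivalent driven through scripts/rpc.py, reusing the exact parameters visible in this log, would look like the sketch below. This is an illustrative condensation, not a verbatim excerpt of target/bdevio.sh, and the RPC path/socket shown are the ones used by this particular run.

  # Sketch: recreate the NVMe-oF/TCP target configuration exercised by bdevio,
  # with the parameters taken from the log above (flags passed as-is).
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # TCP transport, same flags the harness passes (-t tcp -o -u 8192)
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc bdev with 512-byte blocks as the namespace backing store
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # Subsystem cnode1: allow any host, attach the bdev, listen on 10.0.0.2:4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420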
00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1487854 ']' 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1487854 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1487854 ']' 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1487854 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1487854 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1487854' 00:33:19.614 killing process with pid 1487854 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1487854 00:33:19.614 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1487854 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.874 11:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.787 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.787 00:33:21.787 real 0m9.673s 00:33:21.787 user 
0m9.664s 00:33:21.787 sys 0m4.890s 00:33:21.787 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:21.787 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:21.787 ************************************ 00:33:21.787 END TEST nvmf_bdevio 00:33:21.787 ************************************ 00:33:21.787 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:21.787 00:33:21.787 real 4m38.311s 00:33:21.787 user 10m10.393s 00:33:21.787 sys 1m49.302s 00:33:21.787 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:21.787 11:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:21.787 ************************************ 00:33:21.787 END TEST nvmf_target_core_interrupt_mode 00:33:21.787 ************************************ 00:33:21.787 11:51:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:21.787 11:51:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:21.787 11:51:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:21.787 11:51:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.787 ************************************ 00:33:21.787 START TEST nvmf_interrupt 00:33:21.787 ************************************ 00:33:21.787 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:22.047 * Looking for test storage... 
00:33:22.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:22.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.047 --rc genhtml_branch_coverage=1 00:33:22.047 --rc genhtml_function_coverage=1 00:33:22.047 --rc genhtml_legend=1 00:33:22.047 --rc geninfo_all_blocks=1 00:33:22.047 --rc geninfo_unexecuted_blocks=1 00:33:22.047 00:33:22.047 ' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:22.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.047 --rc genhtml_branch_coverage=1 00:33:22.047 --rc genhtml_function_coverage=1 00:33:22.047 --rc genhtml_legend=1 00:33:22.047 --rc geninfo_all_blocks=1 00:33:22.047 --rc geninfo_unexecuted_blocks=1 00:33:22.047 00:33:22.047 ' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:22.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.047 --rc genhtml_branch_coverage=1 00:33:22.047 --rc genhtml_function_coverage=1 00:33:22.047 --rc genhtml_legend=1 00:33:22.047 --rc geninfo_all_blocks=1 00:33:22.047 --rc geninfo_unexecuted_blocks=1 00:33:22.047 00:33:22.047 ' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:22.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.047 --rc genhtml_branch_coverage=1 00:33:22.047 --rc genhtml_function_coverage=1 00:33:22.047 --rc genhtml_legend=1 00:33:22.047 --rc geninfo_all_blocks=1 00:33:22.047 --rc geninfo_unexecuted_blocks=1 00:33:22.047 00:33:22.047 ' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.047 11:51:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.048 11:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:27.322 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:27.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.323 11:51:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:27.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:27.323 Found net devices under 0000:af:00.0: cvl_0_0 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:27.323 Found net devices under 0000:af:00.1: cvl_0_1 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:27.323 11:51:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:27.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:33:27.323 00:33:27.323 --- 10.0.0.2 ping statistics --- 00:33:27.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.323 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:33:27.323 00:33:27.323 --- 10.0.0.1 ping statistics --- 00:33:27.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.323 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1491761 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1491761 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 1491761 ']' 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:27.323 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.324 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:27.324 11:51:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.324 11:51:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:27.324 [2024-11-15 11:51:28.012161] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:27.324 [2024-11-15 11:51:28.013529] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:33:27.324 [2024-11-15 11:51:28.013572] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.324 [2024-11-15 11:51:28.115775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.324 [2024-11-15 11:51:28.164517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
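The nvmf_tcp_init sequence traced above builds the test network out of the two ice ports found during PCI enumeration (cvl_0_0 and cvl_0_1): the target-side port is moved into a dedicated network namespace, both ends get a 10.0.0.x/24 address, an iptables rule opens TCP port 4420, and a ping in each direction verifies connectivity before nvmf_tgt is launched. A condensed sketch of those steps, reusing the interface names, namespace, and addresses from this run (the real helper also flushes stale addresses first and tags the iptables rule with an SPDK_NVMF comment so cleanup can find it later):

  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator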
00:33:27.324 [2024-11-15 11:51:28.164560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.324 [2024-11-15 11:51:28.164570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.324 [2024-11-15 11:51:28.164580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.324 [2024-11-15 11:51:28.164587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.324 [2024-11-15 11:51:28.166080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.324 [2024-11-15 11:51:28.166087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.583 [2024-11-15 11:51:28.242215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:27.583 [2024-11-15 11:51:28.242306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:27.583 [2024-11-15 11:51:28.242515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:27.583 5000+0 records in 00:33:27.583 5000+0 records out 00:33:27.583 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0185199 s, 553 MB/s 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.583 AIO0 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.583 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.584 [2024-11-15 11:51:28.358898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.584 11:51:28 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:27.584 [2024-11-15 11:51:28.387046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1491761 0 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1491761 0 idle 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:27.584 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491761 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.29 reactor_0' 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491761 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.29 reactor_0 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1491761 1 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1491761 1 idle 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:27.843 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491815 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.00 reactor_1' 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491815 root 20 0 128.2g 45696 34048 S 0.0 0.0 0:00.00 reactor_1 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1491915 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1491761 0 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # 
reactor_is_busy_or_idle 1491761 0 busy 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:28.102 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491761 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.51 reactor_0' 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491761 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.51 reactor_0 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1491761 1 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1491761 1 busy 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.103 11:51:28 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:28.103 11:51:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491815 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.29 reactor_1' 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491815 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.29 reactor_1 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.362 11:51:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1491915 00:33:38.362 Initializing NVMe Controllers 00:33:38.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:38.362 Controller IO queue size 256, less than required. 00:33:38.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:38.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:38.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:38.362 Initialization complete. Launching workers. 
00:33:38.362 ======================================================== 00:33:38.362 Latency(us) 00:33:38.362 Device Information : IOPS MiB/s Average min max 00:33:38.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18216.90 71.16 14059.29 3279.26 17312.18 00:33:38.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 11600.50 45.31 22086.34 3271.57 30540.44 00:33:38.362 ======================================================== 00:33:38.362 Total : 29817.40 116.47 17182.22 3271.57 30540.44 00:33:38.362 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1491761 0 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1491761 0 idle 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:38.362 11:51:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491761 root 20 0 128.2g 46592 34048 R 0.0 0.1 0:20.27 reactor_0' 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491761 root 20 0 128.2g 46592 34048 R 0.0 0.1 0:20.27 reactor_0 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1491761 1 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1491761 1 idle 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:38.362 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491815 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:09.99 reactor_1' 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491815 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:09.99 reactor_1 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:38.623 11:51:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:38.624 11:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:38.884 11:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:38.884 11:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:33:38.884 11:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:38.884 11:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:38.884 11:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1491761 0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1491761 0 idle 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491761 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:20.52 reactor_0' 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491761 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:20.52 reactor_0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1491761 1 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1491761 1 idle 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1491761 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
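The busy check during the perf run and the idle checks before and after it all go through the same reactor_is_busy_or_idle helper visible in the trace: it samples one iteration of top in threads mode for the nvmf_tgt pid, greps the reactor_<idx> thread, reads the %CPU column, and treats the reactor as busy when the rate reaches the busy threshold (65 by default, lowered to 30 around the perf run) or as idle when it stays at or below the idle threshold of 30. A minimal sketch of that sampling, using the pid from this run; the exact integer truncation used by the real helper is not visible in the trace:

  pid=1491761   # nvmf_tgt pid from this run
  idx=1         # reactor index to inspect
  line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
  cpu_rate=$(echo "$line" | awk '{print $9}')   # %CPU column of the reactor thread
  cpu_rate=${cpu_rate%.*}                       # drop the fractional part, e.g. 99.9 -> 99
  if (( cpu_rate <= 30 )); then
    echo "reactor_${idx} is idle"
  else
    echo "reactor_${idx} is not idle"
  fi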
00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1491761 -w 256 00:33:41.427 11:51:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1491815 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:10.07 reactor_1' 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1491815 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:10.07 reactor_1 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:41.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:41.427 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.688 rmmod nvme_tcp 00:33:41.688 rmmod nvme_fabrics 00:33:41.688 rmmod nvme_keyring 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1491761 ']' 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1491761 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 1491761 ']' 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 1491761 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1491761 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1491761' 00:33:41.688 killing process with pid 1491761 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 1491761 00:33:41.688 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 1491761 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:41.950 11:51:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.860 11:51:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:43.860 00:33:43.860 real 0m22.084s 00:33:43.860 user 0m39.365s 00:33:43.860 sys 0m7.822s 00:33:43.860 11:51:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:43.860 11:51:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:43.860 ************************************ 00:33:43.860 END TEST nvmf_interrupt 00:33:43.860 ************************************ 00:33:44.119 00:33:44.119 real 28m24.852s 00:33:44.119 user 61m54.544s 00:33:44.119 sys 9m4.388s 00:33:44.119 11:51:44 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:44.119 11:51:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.119 ************************************ 00:33:44.119 END TEST nvmf_tcp 00:33:44.119 ************************************ 00:33:44.119 11:51:44 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:33:44.119 11:51:44 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:44.119 11:51:44 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:44.119 11:51:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:44.119 11:51:44 -- common/autotest_common.sh@10 -- # set +x 00:33:44.119 ************************************ 00:33:44.119 START TEST spdkcli_nvmf_tcp 00:33:44.119 ************************************ 00:33:44.119 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:44.119 * Looking for test storage... 00:33:44.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:44.119 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:44.119 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:33:44.119 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:44.379 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.380 --rc genhtml_branch_coverage=1 00:33:44.380 --rc genhtml_function_coverage=1 00:33:44.380 --rc genhtml_legend=1 00:33:44.380 --rc geninfo_all_blocks=1 00:33:44.380 --rc geninfo_unexecuted_blocks=1 00:33:44.380 00:33:44.380 ' 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.380 --rc genhtml_branch_coverage=1 00:33:44.380 --rc genhtml_function_coverage=1 00:33:44.380 --rc genhtml_legend=1 00:33:44.380 --rc geninfo_all_blocks=1 00:33:44.380 --rc geninfo_unexecuted_blocks=1 00:33:44.380 00:33:44.380 ' 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.380 --rc genhtml_branch_coverage=1 00:33:44.380 --rc genhtml_function_coverage=1 00:33:44.380 --rc genhtml_legend=1 00:33:44.380 --rc geninfo_all_blocks=1 00:33:44.380 --rc geninfo_unexecuted_blocks=1 00:33:44.380 00:33:44.380 ' 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:44.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.380 --rc genhtml_branch_coverage=1 00:33:44.380 --rc genhtml_function_coverage=1 00:33:44.380 --rc genhtml_legend=1 00:33:44.380 --rc geninfo_all_blocks=1 00:33:44.380 --rc geninfo_unexecuted_blocks=1 00:33:44.380 00:33:44.380 ' 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.380 11:51:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:44.380 
11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:44.380 11:51:45 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:44.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1494953 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1494953 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 1494953 ']' 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:44.380 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.380 [2024-11-15 11:51:45.088188] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:33:44.380 [2024-11-15 11:51:45.088250] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494953 ] 00:33:44.380 [2024-11-15 11:51:45.172493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:44.380 [2024-11-15 11:51:45.224689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.380 [2024-11-15 11:51:45.224697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.640 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:44.640 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:33:44.640 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.641 11:51:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:44.641 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:44.641 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:44.641 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:44.641 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:44.641 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:44.641 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:44.641 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:44.641 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:44.641 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:44.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:44.641 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:44.641 ' 00:33:47.183 [2024-11-15 11:51:47.904897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.566 [2024-11-15 11:51:49.125468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:51.108 [2024-11-15 11:51:51.373180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:52.490 [2024-11-15 11:51:53.303891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:54.401 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:54.401 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:54.401 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:54.401 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:54.401 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:54.401 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:54.401 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:54.401 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:54.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:54.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:54.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:54.401 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:54.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:54.401 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:54.401 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:54.402 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:54.402 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:54.402 11:51:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.661 
11:51:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.661 11:51:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:54.662 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:54.662 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:54.662 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:54.662 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:54.662 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:54.662 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:54.662 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:54.662 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:54.662 ' 00:33:59.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:59.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:59.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:59.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:59.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:59.944 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:59.944 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:59.944 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:59.944 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:59.944 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:59.944 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:59.944 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:59.944 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:59.944 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:59.944 11:52:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:59.944 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.944 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.944 
11:52:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1494953 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1494953 ']' 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1494953 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1494953 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1494953' 00:33:59.945 killing process with pid 1494953 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 1494953 00:33:59.945 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 1494953 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1494953 ']' 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1494953 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1494953 ']' 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1494953 00:34:00.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1494953) - No such process 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 1494953 is not found' 00:34:00.204 Process with pid 1494953 is not found 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:00.204 00:34:00.204 real 0m16.120s 00:34:00.204 user 0m33.620s 00:34:00.204 sys 0m0.736s 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:00.204 11:52:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.204 ************************************ 00:34:00.204 END TEST spdkcli_nvmf_tcp 00:34:00.204 ************************************ 00:34:00.204 11:52:00 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:00.204 11:52:00 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:00.205 11:52:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:00.205 11:52:00 -- common/autotest_common.sh@10 -- # set +x 00:34:00.205 ************************************ 00:34:00.205 START TEST nvmf_identify_passthru 00:34:00.205 ************************************ 00:34:00.205 11:52:00 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:00.465 * Looking for test 
storage... 00:34:00.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:00.465 11:52:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:00.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.465 --rc genhtml_branch_coverage=1 00:34:00.465 --rc genhtml_function_coverage=1 00:34:00.465 --rc genhtml_legend=1 00:34:00.465 --rc geninfo_all_blocks=1 00:34:00.465 --rc geninfo_unexecuted_blocks=1 00:34:00.465 00:34:00.465 ' 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:00.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.465 --rc genhtml_branch_coverage=1 00:34:00.465 --rc genhtml_function_coverage=1 00:34:00.465 --rc genhtml_legend=1 00:34:00.465 --rc geninfo_all_blocks=1 00:34:00.465 --rc geninfo_unexecuted_blocks=1 00:34:00.465 00:34:00.465 ' 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:00.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.465 --rc genhtml_branch_coverage=1 00:34:00.465 --rc genhtml_function_coverage=1 00:34:00.465 --rc genhtml_legend=1 00:34:00.465 --rc geninfo_all_blocks=1 00:34:00.465 --rc geninfo_unexecuted_blocks=1 00:34:00.465 00:34:00.465 ' 00:34:00.465 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:00.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.465 --rc genhtml_branch_coverage=1 00:34:00.465 --rc genhtml_function_coverage=1 00:34:00.465 --rc genhtml_legend=1 00:34:00.465 --rc geninfo_all_blocks=1 00:34:00.465 --rc geninfo_unexecuted_blocks=1 00:34:00.465 00:34:00.465 ' 00:34:00.465 11:52:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.465 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:00.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:00.466 11:52:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.466 11:52:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:00.466 11:52:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.466 11:52:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.466 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:00.466 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:00.466 11:52:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:00.466 11:52:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:05.747 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.747 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.747 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.747 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.747 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.748 11:52:06 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:05.748 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:05.748 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:05.748 Found net devices under 0000:af:00.0: cvl_0_0 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:05.748 Found net devices under 0000:af:00.1: cvl_0_1 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.748 11:52:06 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.748 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:06.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:06.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:34:06.009 00:34:06.009 --- 10.0.0.2 ping statistics --- 00:34:06.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.009 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:06.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:06.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:34:06.009 00:34:06.009 --- 10.0.0.1 ping statistics --- 00:34:06.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.009 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:06.009 11:52:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:86:00.0 00:34:06.270 11:52:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:86:00.0 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:06.270 11:52:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:10.471 11:52:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ916308MR1P0FGN 00:34:10.471 11:52:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:34:10.471 11:52:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:10.471 11:52:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:14.671 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:14.671 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:14.671 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:14.671 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.932 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.932 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1502425 00:34:14.932 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:14.932 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:14.932 11:52:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1502425 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 1502425 ']' 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:14.932 11:52:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.932 [2024-11-15 11:52:15.612143] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:34:14.932 [2024-11-15 11:52:15.612204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.932 [2024-11-15 11:52:15.712918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:14.932 [2024-11-15 11:52:15.763339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.932 [2024-11-15 11:52:15.763384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:14.932 [2024-11-15 11:52:15.763395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.932 [2024-11-15 11:52:15.763404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.932 [2024-11-15 11:52:15.763411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.932 [2024-11-15 11:52:15.765480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.932 [2024-11-15 11:52:15.765539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:14.932 [2024-11-15 11:52:15.765635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:14.932 [2024-11-15 11:52:15.765646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:34:15.872 11:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.872 INFO: Log level set to 20 00:34:15.872 INFO: Requests: 00:34:15.872 { 00:34:15.872 "jsonrpc": "2.0", 00:34:15.872 "method": "nvmf_set_config", 00:34:15.872 "id": 1, 00:34:15.872 "params": { 00:34:15.872 "admin_cmd_passthru": { 00:34:15.872 "identify_ctrlr": true 00:34:15.872 } 00:34:15.872 } 00:34:15.872 } 00:34:15.872 00:34:15.872 INFO: response: 00:34:15.872 { 00:34:15.872 "jsonrpc": "2.0", 00:34:15.872 "id": 1, 00:34:15.872 "result": true 00:34:15.872 } 00:34:15.872 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.872 11:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.872 INFO: Setting log level to 20 00:34:15.872 INFO: Setting log level to 20 00:34:15.872 INFO: Log level set to 20 00:34:15.872 INFO: Log level set to 20 00:34:15.872 INFO: Requests: 00:34:15.872 { 00:34:15.872 "jsonrpc": "2.0", 00:34:15.872 "method": "framework_start_init", 00:34:15.872 "id": 1 00:34:15.872 } 00:34:15.872 00:34:15.872 INFO: Requests: 00:34:15.872 { 00:34:15.872 "jsonrpc": "2.0", 00:34:15.872 "method": "framework_start_init", 00:34:15.872 "id": 1 00:34:15.872 } 00:34:15.872 00:34:15.872 [2024-11-15 11:52:16.599660] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:15.872 INFO: response: 00:34:15.872 { 00:34:15.872 "jsonrpc": "2.0", 00:34:15.872 "id": 1, 00:34:15.872 "result": true 00:34:15.872 } 00:34:15.872 00:34:15.872 INFO: response: 00:34:15.872 { 00:34:15.872 "jsonrpc": "2.0", 00:34:15.872 "id": 1, 00:34:15.872 "result": true 00:34:15.872 } 00:34:15.872 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.872 11:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.872 11:52:16 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:15.872 INFO: Setting log level to 40 00:34:15.872 INFO: Setting log level to 40 00:34:15.872 INFO: Setting log level to 40 00:34:15.872 [2024-11-15 11:52:16.613308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.872 11:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.872 11:52:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.872 11:52:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.170 Nvme0n1 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.170 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.170 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.170 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.171 [2024-11-15 11:52:19.546236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.171 [ 00:34:19.171 { 00:34:19.171 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:19.171 "subtype": "Discovery", 00:34:19.171 "listen_addresses": [], 00:34:19.171 "allow_any_host": true, 00:34:19.171 "hosts": [] 00:34:19.171 }, 00:34:19.171 { 00:34:19.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:19.171 "subtype": "NVMe", 00:34:19.171 "listen_addresses": [ 00:34:19.171 { 00:34:19.171 "trtype": "TCP", 00:34:19.171 "adrfam": "IPv4", 00:34:19.171 "traddr": "10.0.0.2", 00:34:19.171 "trsvcid": "4420" 00:34:19.171 } 00:34:19.171 ], 00:34:19.171 "allow_any_host": true, 00:34:19.171 "hosts": [], 00:34:19.171 "serial_number": 
"SPDK00000000000001", 00:34:19.171 "model_number": "SPDK bdev Controller", 00:34:19.171 "max_namespaces": 1, 00:34:19.171 "min_cntlid": 1, 00:34:19.171 "max_cntlid": 65519, 00:34:19.171 "namespaces": [ 00:34:19.171 { 00:34:19.171 "nsid": 1, 00:34:19.171 "bdev_name": "Nvme0n1", 00:34:19.171 "name": "Nvme0n1", 00:34:19.171 "nguid": "FC37E7257DB446A5AB743E324CAFF346", 00:34:19.171 "uuid": "fc37e725-7db4-46a5-ab74-3e324caff346" 00:34:19.171 } 00:34:19.171 ] 00:34:19.171 } 00:34:19.171 ] 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:19.171 11:52:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.171 rmmod nvme_tcp 00:34:19.171 rmmod nvme_fabrics 00:34:19.171 rmmod nvme_keyring 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1502425 ']' 00:34:19.171 11:52:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1502425 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 1502425 ']' 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 1502425 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:19.171 11:52:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1502425 00:34:19.171 11:52:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:19.171 11:52:20 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:19.171 11:52:20 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1502425' 00:34:19.171 killing process with pid 1502425 00:34:19.171 11:52:20 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 1502425 00:34:19.171 11:52:20 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 1502425 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.080 11:52:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.080 11:52:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:21.080 11:52:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.992 11:52:23 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.992 00:34:22.992 real 0m22.614s 00:34:22.992 user 0m29.976s 00:34:22.992 sys 0m6.160s 00:34:22.992 11:52:23 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:22.992 11:52:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.992 ************************************ 00:34:22.992 END TEST nvmf_identify_passthru 00:34:22.992 ************************************ 00:34:22.992 11:52:23 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:22.992 11:52:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:22.992 11:52:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:22.992 11:52:23 -- common/autotest_common.sh@10 -- # set +x 00:34:22.992 ************************************ 00:34:22.992 START TEST nvmf_dif 00:34:22.992 ************************************ 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:22.992 * Looking for test 
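(Editor's sketch, not part of the captured output.) The identify-passthru flow exercised above can be reproduced by hand against a locally built nvmf_tgt; the lines below only restring the rpc_cmd and spdk_nvme_identify invocations already recorded in this log. The relative paths ./build/bin and ./scripts/rpc.py assume the standard SPDK repo layout (the log uses absolute Jenkins workspace paths), the network-namespace wrapping (ip netns exec cvl_0_0_ns_spdk) is omitted, and the PCIe address 0000:86:00.0 plus listen address 10.0.0.2:4420 are the values from this particular run.

  # Start the target with RPC-only init so passthru identify can be enabled before framework start
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # custom identify ctrlr handler
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # Attach the local NVMe drive and export it through an NVMe-oF subsystem
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The serial/model reported over the fabric should match the PCIe-attached drive,
  # which is the check the test performs above
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 | grep 'Serial Number:'
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'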
storage... 00:34:22.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.992 11:52:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:22.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.992 --rc genhtml_branch_coverage=1 00:34:22.992 --rc genhtml_function_coverage=1 00:34:22.992 --rc genhtml_legend=1 00:34:22.992 --rc geninfo_all_blocks=1 00:34:22.992 --rc geninfo_unexecuted_blocks=1 00:34:22.992 00:34:22.992 ' 00:34:22.992 11:52:23 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:22.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.992 --rc genhtml_branch_coverage=1 00:34:22.992 --rc genhtml_function_coverage=1 00:34:22.992 --rc genhtml_legend=1 00:34:22.992 --rc geninfo_all_blocks=1 00:34:22.992 --rc geninfo_unexecuted_blocks=1 00:34:22.992 00:34:22.992 ' 00:34:22.992 11:52:23 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:22.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.992 --rc genhtml_branch_coverage=1 00:34:22.992 --rc genhtml_function_coverage=1 00:34:22.993 --rc genhtml_legend=1 00:34:22.993 --rc geninfo_all_blocks=1 00:34:22.993 --rc geninfo_unexecuted_blocks=1 00:34:22.993 00:34:22.993 ' 00:34:22.993 11:52:23 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.993 --rc genhtml_branch_coverage=1 00:34:22.993 --rc genhtml_function_coverage=1 00:34:22.993 --rc genhtml_legend=1 00:34:22.993 --rc geninfo_all_blocks=1 00:34:22.993 --rc geninfo_unexecuted_blocks=1 00:34:22.993 00:34:22.993 ' 00:34:22.993 11:52:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.993 11:52:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.253 11:52:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:23.253 11:52:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.253 11:52:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.253 11:52:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.253 11:52:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.253 11:52:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.253 11:52:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.253 11:52:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:23.253 11:52:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:23.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:23.253 11:52:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:23.253 11:52:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:23.253 11:52:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:23.253 11:52:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:23.253 11:52:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.253 11:52:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:23.253 11:52:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:23.253 11:52:23 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:23.253 11:52:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:28.538 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.538 
11:52:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:28.538 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.538 11:52:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:28.539 Found net devices under 0000:af:00.0: cvl_0_0 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:28.539 Found net devices under 0000:af:00.1: cvl_0_1 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:34:28.539 00:34:28.539 --- 10.0.0.2 ping statistics --- 00:34:28.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.539 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:34:28.539 00:34:28.539 --- 10.0.0.1 ping statistics --- 00:34:28.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.539 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:28.539 11:52:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:31.078 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:31.078 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:31.078 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:31.079 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:31.339 11:52:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:31.339 11:52:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1508218 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1508218 00:34:31.339 11:52:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 1508218 ']' 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:31.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:31.339 11:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:31.339 [2024-11-15 11:52:32.111207] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:34:31.339 [2024-11-15 11:52:32.111264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.599 [2024-11-15 11:52:32.210085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.599 [2024-11-15 11:52:32.258586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.599 [2024-11-15 11:52:32.258626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.599 [2024-11-15 11:52:32.258637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.599 [2024-11-15 11:52:32.258647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.599 [2024-11-15 11:52:32.258655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.599 [2024-11-15 11:52:32.259369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:34:31.599 11:52:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:31.599 11:52:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.599 11:52:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:31.599 11:52:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:31.599 [2024-11-15 11:52:32.407245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.599 11:52:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:31.599 11:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:31.599 ************************************ 00:34:31.599 START TEST fio_dif_1_default 00:34:31.599 ************************************ 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:31.599 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:31.860 bdev_null0 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:31.860 [2024-11-15 11:52:32.479601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:31.860 { 00:34:31.860 "params": { 00:34:31.860 "name": "Nvme$subsystem", 00:34:31.860 "trtype": "$TEST_TRANSPORT", 00:34:31.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.860 "adrfam": "ipv4", 00:34:31.860 "trsvcid": "$NVMF_PORT", 00:34:31.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.860 "hdgst": ${hdgst:-false}, 00:34:31.860 "ddgst": ${ddgst:-false} 00:34:31.860 }, 00:34:31.860 "method": "bdev_nvme_attach_controller" 00:34:31.860 } 00:34:31.860 EOF 00:34:31.860 )") 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:31.860 "params": { 00:34:31.860 "name": "Nvme0", 00:34:31.860 "trtype": "tcp", 00:34:31.860 "traddr": "10.0.0.2", 00:34:31.860 "adrfam": "ipv4", 00:34:31.860 "trsvcid": "4420", 00:34:31.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.860 "hdgst": false, 00:34:31.860 "ddgst": false 00:34:31.860 }, 00:34:31.860 "method": "bdev_nvme_attach_controller" 00:34:31.860 }' 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.860 11:52:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:32.120 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:32.120 fio-3.35 00:34:32.120 Starting 1 thread 00:34:44.476 00:34:44.476 filename0: (groupid=0, jobs=1): err= 0: pid=1508559: Fri Nov 15 11:52:43 2024 00:34:44.476 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10025msec) 00:34:44.476 slat (nsec): min=2613, max=11459, avg=5516.88, stdev=404.26 00:34:44.476 clat (usec): min=545, max=48428, avg=21089.24, stdev=20448.16 00:34:44.476 lat (usec): min=550, max=48439, avg=21094.75, stdev=20448.16 00:34:44.476 clat percentiles (usec): 00:34:44.476 | 1.00th=[ 562], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:34:44.476 | 30.00th=[ 586], 40.00th=[ 644], 50.00th=[41157], 60.00th=[41157], 00:34:44.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:44.476 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:34:44.476 | 99.99th=[48497] 00:34:44.476 bw ( KiB/s): min= 702, max= 768, per=99.99%, avg=758.30, stdev=21.30, samples=20 00:34:44.476 iops : min= 175, max= 192, avg=189.55, stdev= 5.39, samples=20 00:34:44.476 lat (usec) : 750=49.74%, 1000=0.16% 00:34:44.476 lat (msec) : 50=50.11% 00:34:44.476 cpu : usr=92.33%, sys=7.42%, ctx=9, majf=0, minf=0 00:34:44.476 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.476 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.476 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:44.476 00:34:44.476 Run status group 0 (all jobs): 00:34:44.476 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10025-10025msec 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 00:34:44.476 real 0m11.289s 00:34:44.476 user 0m21.533s 00:34:44.476 sys 0m1.054s 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 ************************************ 00:34:44.476 END TEST fio_dif_1_default 00:34:44.476 ************************************ 00:34:44.476 11:52:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:44.476 11:52:43 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:44.476 11:52:43 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 ************************************ 00:34:44.476 START TEST fio_dif_1_multi_subsystems 00:34:44.476 ************************************ 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 bdev_null0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 [2024-11-15 11:52:43.848821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.476 11:52:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 bdev_null1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 
-- # config+=("$(cat <<-EOF 00:34:44.476 { 00:34:44.476 "params": { 00:34:44.476 "name": "Nvme$subsystem", 00:34:44.476 "trtype": "$TEST_TRANSPORT", 00:34:44.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:44.476 "adrfam": "ipv4", 00:34:44.476 "trsvcid": "$NVMF_PORT", 00:34:44.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:44.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:44.476 "hdgst": ${hdgst:-false}, 00:34:44.476 "ddgst": ${ddgst:-false} 00:34:44.476 }, 00:34:44.476 "method": "bdev_nvme_attach_controller" 00:34:44.476 } 00:34:44.476 EOF 00:34:44.476 )") 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:44.476 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:44.477 { 00:34:44.477 "params": { 00:34:44.477 "name": "Nvme$subsystem", 00:34:44.477 "trtype": "$TEST_TRANSPORT", 00:34:44.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:44.477 "adrfam": "ipv4", 00:34:44.477 "trsvcid": "$NVMF_PORT", 00:34:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:44.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:44.477 "hdgst": ${hdgst:-false}, 00:34:44.477 "ddgst": ${ddgst:-false} 00:34:44.477 }, 00:34:44.477 "method": "bdev_nvme_attach_controller" 00:34:44.477 } 00:34:44.477 EOF 00:34:44.477 )") 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:44.477 11:52:43 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:44.477 "params": { 00:34:44.477 "name": "Nvme0", 00:34:44.477 "trtype": "tcp", 00:34:44.477 "traddr": "10.0.0.2", 00:34:44.477 "adrfam": "ipv4", 00:34:44.477 "trsvcid": "4420", 00:34:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:44.477 "hdgst": false, 00:34:44.477 "ddgst": false 00:34:44.477 }, 00:34:44.477 "method": "bdev_nvme_attach_controller" 00:34:44.477 },{ 00:34:44.477 "params": { 00:34:44.477 "name": "Nvme1", 00:34:44.477 "trtype": "tcp", 00:34:44.477 "traddr": "10.0.0.2", 00:34:44.477 "adrfam": "ipv4", 00:34:44.477 "trsvcid": "4420", 00:34:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:44.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:44.477 "hdgst": false, 00:34:44.477 "ddgst": false 00:34:44.477 }, 00:34:44.477 "method": "bdev_nvme_attach_controller" 00:34:44.477 }' 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:44.477 11:52:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:44.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:44.477 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:44.477 fio-3.35 00:34:44.477 Starting 2 threads 00:34:54.463 00:34:54.463 filename0: (groupid=0, jobs=1): err= 0: pid=1510677: Fri Nov 15 11:52:55 2024 00:34:54.463 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10009msec) 00:34:54.463 slat (nsec): min=9279, max=32624, avg=11449.34, stdev=3224.59 00:34:54.463 clat (usec): min=40885, max=42947, avg=41842.71, stdev=358.45 00:34:54.463 lat (usec): min=40901, max=42965, avg=41854.15, stdev=358.61 00:34:54.463 clat percentiles (usec): 00:34:54.463 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:54.463 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:54.463 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:54.463 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:54.463 | 99.99th=[42730] 
00:34:54.463 bw ( KiB/s): min= 352, max= 384, per=49.50%, avg=380.80, stdev= 9.85, samples=20 00:34:54.463 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:34:54.463 lat (msec) : 50=100.00% 00:34:54.463 cpu : usr=96.30%, sys=3.28%, ctx=17, majf=0, minf=0 00:34:54.463 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.463 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.463 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:54.463 filename1: (groupid=0, jobs=1): err= 0: pid=1510678: Fri Nov 15 11:52:55 2024 00:34:54.463 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10026msec) 00:34:54.463 slat (nsec): min=9289, max=32450, avg=11535.78, stdev=3286.18 00:34:54.463 clat (usec): min=40839, max=42947, avg=41393.53, stdev=511.14 00:34:54.463 lat (usec): min=40848, max=42962, avg=41405.07, stdev=511.43 00:34:54.463 clat percentiles (usec): 00:34:54.463 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:54.463 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:34:54.463 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:54.463 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:54.463 | 99.99th=[42730] 00:34:54.463 bw ( KiB/s): min= 384, max= 416, per=50.16%, avg=385.60, stdev= 7.16, samples=20 00:34:54.463 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:34:54.463 lat (msec) : 50=100.00% 00:34:54.463 cpu : usr=96.29%, sys=3.38%, ctx=28, majf=0, minf=0 00:34:54.463 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.463 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.463 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:54.463 00:34:54.463 Run status group 0 (all jobs): 00:34:54.463 READ: bw=768KiB/s (786kB/s), 382KiB/s-386KiB/s (391kB/s-395kB/s), io=7696KiB (7881kB), run=10009-10026msec 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.463 00:34:54.463 real 0m11.422s 00:34:54.463 user 0m31.316s 00:34:54.463 sys 0m1.008s 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:54.463 ************************************ 00:34:54.463 END TEST fio_dif_1_multi_subsystems 00:34:54.463 ************************************ 00:34:54.463 11:52:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:54.463 11:52:55 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:54.463 11:52:55 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:54.463 ************************************ 00:34:54.463 START TEST fio_dif_rand_params 00:34:54.463 ************************************ 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:54.463 11:52:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.463 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.724 bdev_null0 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:54.724 [2024-11-15 11:52:55.346097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:54.724 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:54.724 { 00:34:54.724 "params": { 00:34:54.724 "name": "Nvme$subsystem", 00:34:54.724 "trtype": "$TEST_TRANSPORT", 00:34:54.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:54.724 "adrfam": "ipv4", 00:34:54.724 "trsvcid": "$NVMF_PORT", 00:34:54.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:54.725 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:54.725 "hdgst": ${hdgst:-false}, 00:34:54.725 "ddgst": ${ddgst:-false} 00:34:54.725 }, 00:34:54.725 "method": "bdev_nvme_attach_controller" 00:34:54.725 } 00:34:54.725 EOF 00:34:54.725 )") 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:54.725 "params": { 00:34:54.725 "name": "Nvme0", 00:34:54.725 "trtype": "tcp", 00:34:54.725 "traddr": "10.0.0.2", 00:34:54.725 "adrfam": "ipv4", 00:34:54.725 "trsvcid": "4420", 00:34:54.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:54.725 "hdgst": false, 00:34:54.725 "ddgst": false 00:34:54.725 }, 00:34:54.725 "method": "bdev_nvme_attach_controller" 00:34:54.725 }' 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:54.725 11:52:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.984 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:54.984 ... 
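For anyone reproducing this pass by hand: the run above drives fio through SPDK's fio bdev plugin rather than the kernel NVMe/TCP initiator. A minimal standalone sketch of an equivalent invocation, assuming the standard SPDK bdev-subsystem JSON wrapper around the bdev_nvme_attach_controller entry printed in the trace and a hypothetical job file /tmp/randread.fio carrying the same bs=128k / iodepth=3 / numjobs=3 / runtime=5 parameters, would be:

# sketch only: the controller parameters are copied from the trace above; the file
# paths, the JSON wrapper, and the separate job file are illustrative assumptions
cat > /tmp/nvme0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json /tmp/randread.fio
# the attached controller surfaces as bdev "Nvme0n1", which the job file references via filename=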
00:34:54.984 fio-3.35 00:34:54.984 Starting 3 threads 00:35:01.556 00:35:01.556 filename0: (groupid=0, jobs=1): err= 0: pid=1512872: Fri Nov 15 11:53:01 2024 00:35:01.556 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5043msec) 00:35:01.556 slat (nsec): min=9353, max=25316, avg=14642.87, stdev=2035.57 00:35:01.556 clat (usec): min=7448, max=54826, avg=13953.19, stdev=5416.84 00:35:01.556 lat (usec): min=7462, max=54852, avg=13967.83, stdev=5416.89 00:35:01.556 clat percentiles (usec): 00:35:01.556 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11338], 00:35:01.556 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13435], 60.00th=[14091], 00:35:01.556 | 70.00th=[14615], 80.00th=[15270], 90.00th=[16712], 95.00th=[17957], 00:35:01.556 | 99.00th=[51643], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:35:01.556 | 99.99th=[54789] 00:35:01.556 bw ( KiB/s): min=22528, max=31232, per=35.97%, avg=27596.80, stdev=2536.57, samples=10 00:35:01.556 iops : min= 176, max= 244, avg=215.60, stdev=19.82, samples=10 00:35:01.556 lat (msec) : 10=10.56%, 20=87.69%, 50=0.46%, 100=1.30% 00:35:01.556 cpu : usr=94.51%, sys=5.18%, ctx=9, majf=0, minf=9 00:35:01.556 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.556 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.556 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:01.556 filename0: (groupid=0, jobs=1): err= 0: pid=1512873: Fri Nov 15 11:53:01 2024 00:35:01.556 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(126MiB/5044msec) 00:35:01.556 slat (nsec): min=9402, max=85203, avg=15389.96, stdev=3549.74 00:35:01.556 clat (usec): min=4860, max=55013, avg=14995.78, stdev=4945.64 00:35:01.556 lat (usec): min=4871, max=55028, avg=15011.17, stdev=4945.51 00:35:01.556 clat percentiles (usec): 00:35:01.556 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[12387], 00:35:01.556 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14615], 60.00th=[15401], 00:35:01.556 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17957], 95.00th=[18482], 00:35:01.556 | 99.00th=[46400], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:35:01.556 | 99.99th=[54789] 00:35:01.556 bw ( KiB/s): min=22528, max=28416, per=33.47%, avg=25676.80, stdev=1802.33, samples=10 00:35:01.557 iops : min= 176, max= 222, avg=200.60, stdev=14.08, samples=10 00:35:01.557 lat (msec) : 10=8.36%, 20=89.25%, 50=1.99%, 100=0.40% 00:35:01.557 cpu : usr=94.29%, sys=5.37%, ctx=6, majf=0, minf=11 00:35:01.557 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.557 issued rwts: total=1005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.557 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:01.557 filename0: (groupid=0, jobs=1): err= 0: pid=1512874: Fri Nov 15 11:53:01 2024 00:35:01.557 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(117MiB/5045msec) 00:35:01.557 slat (nsec): min=2791, max=32844, avg=10468.52, stdev=2493.18 00:35:01.557 clat (usec): min=7052, max=58780, avg=16060.76, stdev=9591.82 00:35:01.557 lat (usec): min=7058, max=58790, avg=16071.23, stdev=9591.48 00:35:01.557 clat percentiles (usec): 00:35:01.557 | 1.00th=[ 8979], 5.00th=[10814], 10.00th=[11207], 
20.00th=[12125], 00:35:01.557 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13960], 60.00th=[14615], 00:35:01.557 | 70.00th=[15008], 80.00th=[15795], 90.00th=[16712], 95.00th=[52167], 00:35:01.557 | 99.00th=[54264], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:35:01.557 | 99.99th=[58983] 00:35:01.557 bw ( KiB/s): min=12544, max=30720, per=31.23%, avg=23961.60, stdev=5403.97, samples=10 00:35:01.557 iops : min= 98, max= 240, avg=187.20, stdev=42.22, samples=10 00:35:01.557 lat (msec) : 10=2.45%, 20=91.48%, 50=0.21%, 100=5.86% 00:35:01.557 cpu : usr=95.38%, sys=4.32%, ctx=9, majf=0, minf=9 00:35:01.557 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.557 issued rwts: total=939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.557 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:01.557 00:35:01.557 Run status group 0 (all jobs): 00:35:01.557 READ: bw=74.9MiB/s (78.6MB/s), 23.3MiB/s-26.8MiB/s (24.4MB/s-28.1MB/s), io=378MiB (396MB), run=5043-5045msec 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 bdev_null0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 [2024-11-15 11:53:01.695970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 bdev_null1 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 bdev_null2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.557 11:53:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:01.557 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.557 { 00:35:01.557 "params": { 00:35:01.557 "name": "Nvme$subsystem", 00:35:01.557 "trtype": "$TEST_TRANSPORT", 00:35:01.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.557 "adrfam": "ipv4", 00:35:01.558 "trsvcid": "$NVMF_PORT", 00:35:01.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.558 "hdgst": ${hdgst:-false}, 00:35:01.558 "ddgst": ${ddgst:-false} 00:35:01.558 }, 00:35:01.558 "method": "bdev_nvme_attach_controller" 00:35:01.558 } 00:35:01.558 EOF 00:35:01.558 )") 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.558 { 00:35:01.558 "params": { 00:35:01.558 "name": "Nvme$subsystem", 00:35:01.558 "trtype": "$TEST_TRANSPORT", 00:35:01.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.558 "adrfam": "ipv4", 00:35:01.558 "trsvcid": "$NVMF_PORT", 00:35:01.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.558 "hdgst": ${hdgst:-false}, 00:35:01.558 "ddgst": ${ddgst:-false} 00:35:01.558 }, 00:35:01.558 "method": "bdev_nvme_attach_controller" 00:35:01.558 } 00:35:01.558 EOF 00:35:01.558 )") 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.558 11:53:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.558 { 00:35:01.558 "params": { 00:35:01.558 "name": "Nvme$subsystem", 00:35:01.558 "trtype": "$TEST_TRANSPORT", 00:35:01.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.558 "adrfam": "ipv4", 00:35:01.558 "trsvcid": "$NVMF_PORT", 00:35:01.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.558 "hdgst": ${hdgst:-false}, 00:35:01.558 "ddgst": ${ddgst:-false} 00:35:01.558 }, 00:35:01.558 "method": "bdev_nvme_attach_controller" 00:35:01.558 } 00:35:01.558 EOF 00:35:01.558 )") 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:01.558 "params": { 00:35:01.558 "name": "Nvme0", 00:35:01.558 "trtype": "tcp", 00:35:01.558 "traddr": "10.0.0.2", 00:35:01.558 "adrfam": "ipv4", 00:35:01.558 "trsvcid": "4420", 00:35:01.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.558 "hdgst": false, 00:35:01.558 "ddgst": false 00:35:01.558 }, 00:35:01.558 "method": "bdev_nvme_attach_controller" 00:35:01.558 },{ 00:35:01.558 "params": { 00:35:01.558 "name": "Nvme1", 00:35:01.558 "trtype": "tcp", 00:35:01.558 "traddr": "10.0.0.2", 00:35:01.558 "adrfam": "ipv4", 00:35:01.558 "trsvcid": "4420", 00:35:01.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:01.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.558 "hdgst": false, 00:35:01.558 "ddgst": false 00:35:01.558 }, 00:35:01.558 "method": "bdev_nvme_attach_controller" 00:35:01.558 },{ 00:35:01.558 "params": { 00:35:01.558 "name": "Nvme2", 00:35:01.558 "trtype": "tcp", 00:35:01.558 "traddr": "10.0.0.2", 00:35:01.558 "adrfam": "ipv4", 00:35:01.558 "trsvcid": "4420", 00:35:01.558 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:01.558 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:01.558 "hdgst": false, 00:35:01.558 "ddgst": false 00:35:01.558 }, 00:35:01.558 "method": "bdev_nvme_attach_controller" 00:35:01.558 }' 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:01.558 
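The per-RPC trace above condenses to the following target-side setup for this pass. This is a sketch that issues the same calls through scripts/rpc.py; the test actually goes through its rpc_cmd helper, and the loop form is an illustrative assumption, but the arguments are exactly the ones shown in the trace (null bdevs now created with --dif-type 2):

for i in 0 1 2; do
  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
  scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

fio then attaches the three subsystems as controllers Nvme0/Nvme1/Nvme2 via the JSON printed above and runs 8 randread jobs per file (bs=4k, iodepth=16) against filename0/1/2, i.e. the 24 threads reported below.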
11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:01.558 11:53:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.558 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:01.558 ... 00:35:01.558 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:01.558 ... 00:35:01.558 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:01.558 ... 00:35:01.558 fio-3.35 00:35:01.558 Starting 24 threads 00:35:13.777 00:35:13.777 filename0: (groupid=0, jobs=1): err= 0: pid=1514107: Fri Nov 15 11:53:13 2024 00:35:13.777 read: IOPS=415, BW=1662KiB/s (1702kB/s)(16.2MiB/10013msec) 00:35:13.777 slat (nsec): min=5666, max=84627, avg=34000.66, stdev=21035.40 00:35:13.777 clat (usec): min=13637, max=60359, avg=38226.92, stdev=2631.59 00:35:13.777 lat (usec): min=13654, max=60370, avg=38260.92, stdev=2632.78 00:35:13.777 clat percentiles (usec): 00:35:13.777 | 1.00th=[22414], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.777 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.777 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.777 | 99.00th=[40633], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:35:13.777 | 99.99th=[60556] 00:35:13.777 bw ( KiB/s): min= 1536, max= 1792, per=4.18%, avg=1657.60, stdev=50.44, samples=20 00:35:13.777 iops : min= 384, max= 448, avg=414.40, stdev=12.61, samples=20 00:35:13.777 lat (msec) : 20=0.82%, 50=99.13%, 100=0.05% 00:35:13.777 cpu : usr=98.99%, sys=0.62%, ctx=21, majf=0, minf=9 00:35:13.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.777 issued rwts: total=4160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.777 filename0: (groupid=0, jobs=1): err= 0: pid=1514108: Fri Nov 15 11:53:13 2024 00:35:13.777 read: IOPS=412, BW=1650KiB/s (1689kB/s)(16.1MiB/10010msec) 00:35:13.777 slat (usec): min=5, max=109, avg=52.07, stdev=17.65 00:35:13.777 clat (usec): min=34452, max=45027, avg=38370.44, stdev=692.06 00:35:13.777 lat (usec): min=34470, max=45071, avg=38422.51, stdev=688.57 00:35:13.777 clat percentiles (usec): 00:35:13.777 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.777 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[38536], 00:35:13.777 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.777 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:13.777 | 99.99th=[44827] 00:35:13.777 bw ( KiB/s): min= 1532, max= 1667, per=4.16%, avg=1650.47, stdev=41.06, samples=19 00:35:13.777 iops : min= 383, max= 416, avg=412.58, stdev=10.25, samples=19 00:35:13.777 lat (msec) : 50=100.00% 00:35:13.777 cpu : usr=98.43%, sys=1.09%, ctx=36, majf=0, minf=9 00:35:13.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:35:13.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.777 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.777 filename0: (groupid=0, jobs=1): err= 0: pid=1514109: Fri Nov 15 11:53:13 2024 00:35:13.777 read: IOPS=413, BW=1653KiB/s (1692kB/s)(16.2MiB/10029msec) 00:35:13.777 slat (usec): min=9, max=111, avg=30.40, stdev=17.03 00:35:13.777 clat (usec): min=21468, max=45209, avg=38494.72, stdev=1248.48 00:35:13.777 lat (usec): min=21484, max=45238, avg=38525.12, stdev=1245.99 00:35:13.777 clat percentiles (usec): 00:35:13.777 | 1.00th=[35914], 5.00th=[38011], 10.00th=[38011], 20.00th=[38536], 00:35:13.777 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.777 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:35:13.777 | 99.99th=[45351] 00:35:13.777 bw ( KiB/s): min= 1536, max= 1667, per=4.17%, avg=1651.15, stdev=39.40, samples=20 00:35:13.777 iops : min= 384, max= 416, avg=412.75, stdev= 9.83, samples=20 00:35:13.777 lat (msec) : 50=100.00% 00:35:13.777 cpu : usr=98.37%, sys=1.22%, ctx=19, majf=0, minf=9 00:35:13.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.777 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.777 filename0: (groupid=0, jobs=1): err= 0: pid=1514110: Fri Nov 15 11:53:13 2024 00:35:13.777 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:35:13.777 slat (usec): min=7, max=137, avg=55.89, stdev=15.93 00:35:13.777 clat (usec): min=19230, max=66110, avg=38315.19, stdev=1868.06 00:35:13.777 lat (usec): min=19290, max=66124, avg=38371.08, stdev=1866.41 00:35:13.777 clat percentiles (usec): 00:35:13.777 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.778 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[42206], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:35:13.778 | 99.99th=[66323] 00:35:13.778 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.40, stdev=46.74, samples=20 00:35:13.778 iops : min= 384, max= 416, avg=411.10, stdev=11.68, samples=20 00:35:13.778 lat (msec) : 20=0.34%, 50=99.27%, 100=0.39% 00:35:13.778 cpu : usr=98.74%, sys=0.84%, ctx=73, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename0: (groupid=0, jobs=1): err= 0: pid=1514111: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=412, BW=1649KiB/s (1688kB/s)(16.1MiB/10014msec) 00:35:13.778 slat (usec): min=4, max=128, avg=54.88, stdev=16.39 00:35:13.778 clat (usec): min=19177, max=59059, avg=38319.79, 
stdev=1853.87 00:35:13.778 lat (usec): min=19191, max=59073, avg=38374.67, stdev=1851.91 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.778 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[41681], 99.50th=[44827], 99.90th=[58983], 99.95th=[58983], 00:35:13.778 | 99.99th=[58983] 00:35:13.778 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.40, stdev=46.33, samples=20 00:35:13.778 iops : min= 384, max= 416, avg=411.05, stdev=11.67, samples=20 00:35:13.778 lat (msec) : 20=0.36%, 50=99.25%, 100=0.39% 00:35:13.778 cpu : usr=98.84%, sys=0.72%, ctx=19, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename0: (groupid=0, jobs=1): err= 0: pid=1514112: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=413, BW=1653KiB/s (1693kB/s)(16.2MiB/10025msec) 00:35:13.778 slat (nsec): min=5838, max=80820, avg=43329.02, stdev=18199.50 00:35:13.778 clat (usec): min=17530, max=46931, avg=38314.85, stdev=1304.26 00:35:13.778 lat (usec): min=17541, max=46956, avg=38358.18, stdev=1306.55 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[32637], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.778 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:35:13.778 | 99.99th=[46924] 00:35:13.778 bw ( KiB/s): min= 1536, max= 1664, per=4.17%, avg=1651.00, stdev=39.34, samples=20 00:35:13.778 iops : min= 384, max= 416, avg=412.75, stdev= 9.83, samples=20 00:35:13.778 lat (msec) : 20=0.05%, 50=99.95% 00:35:13.778 cpu : usr=98.58%, sys=0.99%, ctx=36, majf=0, minf=10 00:35:13.778 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename0: (groupid=0, jobs=1): err= 0: pid=1514113: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=410, BW=1643KiB/s (1682kB/s)(16.1MiB/10013msec) 00:35:13.778 slat (nsec): min=9228, max=77592, avg=27667.92, stdev=15200.45 00:35:13.778 clat (usec): min=23546, max=96421, avg=38713.10, stdev=3330.08 00:35:13.778 lat (usec): min=23563, max=96437, avg=38740.77, stdev=3329.14 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[38011], 5.00th=[38011], 10.00th=[38011], 20.00th=[38536], 00:35:13.778 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.778 | 99.00th=[43254], 99.50th=[44827], 99.90th=[87557], 99.95th=[87557], 00:35:13.778 | 99.99th=[95945] 00:35:13.778 bw ( KiB/s): min= 1408, max= 1664, per=4.13%, avg=1638.00, stdev=66.81, samples=20 
00:35:13.778 iops : min= 352, max= 416, avg=409.50, stdev=16.70, samples=20 00:35:13.778 lat (msec) : 50=99.61%, 100=0.39% 00:35:13.778 cpu : usr=98.41%, sys=1.22%, ctx=14, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename0: (groupid=0, jobs=1): err= 0: pid=1514114: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=414, BW=1659KiB/s (1699kB/s)(16.2MiB/10031msec) 00:35:13.778 slat (usec): min=9, max=153, avg=49.41, stdev=26.55 00:35:13.778 clat (usec): min=12910, max=44981, avg=38162.48, stdev=2113.32 00:35:13.778 lat (usec): min=12923, max=45015, avg=38211.89, stdev=2114.52 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[22152], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:35:13.778 | 30.00th=[38011], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:13.778 | 99.99th=[44827] 00:35:13.778 bw ( KiB/s): min= 1536, max= 1792, per=4.18%, avg=1657.40, stdev=50.42, samples=20 00:35:13.778 iops : min= 384, max= 448, avg=414.35, stdev=12.60, samples=20 00:35:13.778 lat (msec) : 20=0.38%, 50=99.62% 00:35:13.778 cpu : usr=98.38%, sys=1.16%, ctx=65, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename1: (groupid=0, jobs=1): err= 0: pid=1514115: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=414, BW=1659KiB/s (1699kB/s)(16.2MiB/10029msec) 00:35:13.778 slat (usec): min=9, max=139, avg=49.84, stdev=25.63 00:35:13.778 clat (usec): min=14160, max=45066, avg=38144.11, stdev=2137.15 00:35:13.778 lat (usec): min=14170, max=45121, avg=38193.95, stdev=2138.67 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[21890], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:35:13.778 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:13.778 | 99.99th=[44827] 00:35:13.778 bw ( KiB/s): min= 1536, max= 1795, per=4.18%, avg=1657.55, stdev=50.84, samples=20 00:35:13.778 iops : min= 384, max= 448, avg=414.35, stdev=12.60, samples=20 00:35:13.778 lat (msec) : 20=0.38%, 50=99.62% 00:35:13.778 cpu : usr=98.56%, sys=1.06%, ctx=15, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename1: 
(groupid=0, jobs=1): err= 0: pid=1514116: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=412, BW=1649KiB/s (1688kB/s)(16.1MiB/10016msec) 00:35:13.778 slat (usec): min=4, max=109, avg=54.42, stdev=16.36 00:35:13.778 clat (usec): min=19322, max=60452, avg=38352.71, stdev=1908.69 00:35:13.778 lat (usec): min=19370, max=60470, avg=38407.14, stdev=1905.92 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.778 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[42206], 99.50th=[44827], 99.90th=[60556], 99.95th=[60556], 00:35:13.778 | 99.99th=[60556] 00:35:13.778 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.25, stdev=46.69, samples=20 00:35:13.778 iops : min= 384, max= 416, avg=411.05, stdev=11.67, samples=20 00:35:13.778 lat (msec) : 20=0.29%, 50=99.32%, 100=0.39% 00:35:13.778 cpu : usr=98.41%, sys=1.10%, ctx=57, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.778 filename1: (groupid=0, jobs=1): err= 0: pid=1514117: Fri Nov 15 11:53:13 2024 00:35:13.778 read: IOPS=410, BW=1643KiB/s (1682kB/s)(16.1MiB/10012msec) 00:35:13.778 slat (nsec): min=6398, max=81625, avg=29752.65, stdev=16552.75 00:35:13.778 clat (usec): min=23381, max=96386, avg=38671.55, stdev=3333.11 00:35:13.778 lat (usec): min=23399, max=96401, avg=38701.30, stdev=3332.11 00:35:13.778 clat percentiles (usec): 00:35:13.778 | 1.00th=[38011], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.778 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.778 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.778 | 99.00th=[43254], 99.50th=[44827], 99.90th=[87557], 99.95th=[87557], 00:35:13.778 | 99.99th=[95945] 00:35:13.778 bw ( KiB/s): min= 1408, max= 1664, per=4.13%, avg=1638.00, stdev=66.81, samples=20 00:35:13.778 iops : min= 352, max= 416, avg=409.50, stdev=16.70, samples=20 00:35:13.778 lat (msec) : 50=99.61%, 100=0.39% 00:35:13.778 cpu : usr=97.85%, sys=1.49%, ctx=79, majf=0, minf=9 00:35:13.778 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.778 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename1: (groupid=0, jobs=1): err= 0: pid=1514118: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10011msec) 00:35:13.779 slat (usec): min=8, max=153, avg=51.42, stdev=23.15 00:35:13.779 clat (usec): min=34316, max=45073, avg=38368.17, stdev=715.34 00:35:13.779 lat (usec): min=34334, max=45115, avg=38419.60, stdev=711.78 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.779 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 
80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.779 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:13.779 | 99.99th=[44827] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1664, per=4.16%, avg=1650.11, stdev=40.23, samples=19 00:35:13.779 iops : min= 384, max= 416, avg=412.53, stdev=10.06, samples=19 00:35:13.779 lat (msec) : 50=100.00% 00:35:13.779 cpu : usr=98.46%, sys=1.08%, ctx=41, majf=0, minf=9 00:35:13.779 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename1: (groupid=0, jobs=1): err= 0: pid=1514119: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=412, BW=1649KiB/s (1688kB/s)(16.1MiB/10016msec) 00:35:13.779 slat (usec): min=4, max=141, avg=54.58, stdev=20.37 00:35:13.779 clat (usec): min=19332, max=60792, avg=38311.44, stdev=1926.52 00:35:13.779 lat (usec): min=19352, max=60808, avg=38366.02, stdev=1925.06 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.779 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.779 | 99.00th=[42206], 99.50th=[44827], 99.90th=[60556], 99.95th=[60556], 00:35:13.779 | 99.99th=[60556] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.25, stdev=46.69, samples=20 00:35:13.779 iops : min= 384, max= 416, avg=411.05, stdev=11.67, samples=20 00:35:13.779 lat (msec) : 20=0.34%, 50=99.27%, 100=0.39% 00:35:13.779 cpu : usr=98.47%, sys=0.98%, ctx=39, majf=0, minf=9 00:35:13.779 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename1: (groupid=0, jobs=1): err= 0: pid=1514120: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=413, BW=1652KiB/s (1692kB/s)(16.2MiB/10031msec) 00:35:13.779 slat (usec): min=8, max=110, avg=26.98, stdev=16.50 00:35:13.779 clat (usec): min=21509, max=45042, avg=38524.82, stdev=1236.73 00:35:13.779 lat (usec): min=21533, max=45062, avg=38551.80, stdev=1234.95 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[35914], 5.00th=[38011], 10.00th=[38011], 20.00th=[38536], 00:35:13.779 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.779 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:13.779 | 99.99th=[44827] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1664, per=4.17%, avg=1651.00, stdev=39.34, samples=20 00:35:13.779 iops : min= 384, max= 416, avg=412.75, stdev= 9.83, samples=20 00:35:13.779 lat (msec) : 50=100.00% 00:35:13.779 cpu : usr=97.26%, sys=1.76%, ctx=89, majf=0, minf=9 00:35:13.779 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename1: (groupid=0, jobs=1): err= 0: pid=1514121: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=411, BW=1648KiB/s (1687kB/s)(16.1MiB/10020msec) 00:35:13.779 slat (nsec): min=6167, max=81821, avg=29194.30, stdev=16973.16 00:35:13.779 clat (usec): min=23301, max=57406, avg=38581.65, stdev=1628.91 00:35:13.779 lat (usec): min=23320, max=57426, avg=38610.85, stdev=1627.37 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[38011], 5.00th=[38011], 10.00th=[38011], 20.00th=[38536], 00:35:13.779 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.779 | 99.00th=[43254], 99.50th=[45351], 99.90th=[57410], 99.95th=[57410], 00:35:13.779 | 99.99th=[57410] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1643.25, stdev=46.61, samples=20 00:35:13.779 iops : min= 384, max= 416, avg=410.80, stdev=11.66, samples=20 00:35:13.779 lat (msec) : 50=99.61%, 100=0.39% 00:35:13.779 cpu : usr=98.47%, sys=1.10%, ctx=45, majf=0, minf=9 00:35:13.779 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename1: (groupid=0, jobs=1): err= 0: pid=1514122: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=414, BW=1659KiB/s (1699kB/s)(16.2MiB/10029msec) 00:35:13.779 slat (nsec): min=9413, max=65778, avg=19228.70, stdev=7449.53 00:35:13.779 clat (usec): min=9730, max=45747, avg=38393.31, stdev=2464.12 00:35:13.779 lat (usec): min=9749, max=45770, avg=38412.54, stdev=2464.55 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[21365], 5.00th=[38536], 10.00th=[38536], 20.00th=[38536], 00:35:13.779 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.779 | 99.00th=[42730], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:35:13.779 | 99.99th=[45876] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1795, per=4.18%, avg=1657.55, stdev=50.84, samples=20 00:35:13.779 iops : min= 384, max= 448, avg=414.35, stdev=12.60, samples=20 00:35:13.779 lat (msec) : 10=0.05%, 20=0.67%, 50=99.28% 00:35:13.779 cpu : usr=98.38%, sys=1.22%, ctx=18, majf=0, minf=9 00:35:13.779 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename2: (groupid=0, jobs=1): err= 0: pid=1514123: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=412, BW=1649KiB/s (1688kB/s)(16.1MiB/10014msec) 00:35:13.779 slat (usec): min=4, max=113, avg=54.69, stdev=15.57 00:35:13.779 clat (usec): min=19311, max=58672, avg=38327.26, stdev=1831.87 00:35:13.779 lat (usec): min=19358, max=58687, 
avg=38381.95, stdev=1829.75 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.779 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.779 | 99.00th=[42206], 99.50th=[44827], 99.90th=[58459], 99.95th=[58459], 00:35:13.779 | 99.99th=[58459] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.40, stdev=46.74, samples=20 00:35:13.779 iops : min= 384, max= 416, avg=411.10, stdev=11.68, samples=20 00:35:13.779 lat (msec) : 20=0.31%, 50=99.30%, 100=0.39% 00:35:13.779 cpu : usr=97.74%, sys=1.51%, ctx=74, majf=0, minf=9 00:35:13.779 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename2: (groupid=0, jobs=1): err= 0: pid=1514124: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=413, BW=1654KiB/s (1693kB/s)(16.2MiB/10024msec) 00:35:13.779 slat (nsec): min=9476, max=44496, avg=15831.39, stdev=5605.61 00:35:13.779 clat (usec): min=19126, max=45756, avg=38563.62, stdev=1420.88 00:35:13.779 lat (usec): min=19136, max=45768, avg=38579.45, stdev=1420.74 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[32900], 5.00th=[38536], 10.00th=[38536], 20.00th=[38536], 00:35:13.779 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.779 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45876], 00:35:13.779 | 99.99th=[45876] 00:35:13.779 bw ( KiB/s): min= 1536, max= 1667, per=4.17%, avg=1651.15, stdev=39.40, samples=20 00:35:13.779 iops : min= 384, max= 416, avg=412.75, stdev= 9.83, samples=20 00:35:13.779 lat (msec) : 20=0.05%, 50=99.95% 00:35:13.779 cpu : usr=98.07%, sys=1.40%, ctx=57, majf=0, minf=9 00:35:13.779 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:13.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.779 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.779 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.779 filename2: (groupid=0, jobs=1): err= 0: pid=1514125: Fri Nov 15 11:53:13 2024 00:35:13.779 read: IOPS=415, BW=1662KiB/s (1702kB/s)(16.2MiB/10013msec) 00:35:13.779 slat (nsec): min=9498, max=79616, avg=37084.12, stdev=18888.21 00:35:13.779 clat (usec): min=13600, max=45463, avg=38229.74, stdev=2537.78 00:35:13.779 lat (usec): min=13609, max=45489, avg=38266.82, stdev=2539.08 00:35:13.779 clat percentiles (usec): 00:35:13.779 | 1.00th=[22676], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.779 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.779 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.779 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:35:13.779 | 99.99th=[45351] 00:35:13.780 bw ( KiB/s): min= 1536, max= 1792, per=4.18%, avg=1657.60, stdev=50.44, samples=20 00:35:13.780 iops : min= 384, max= 448, avg=414.40, stdev=12.61, 
samples=20 00:35:13.780 lat (msec) : 20=0.77%, 50=99.23% 00:35:13.780 cpu : usr=98.43%, sys=1.19%, ctx=15, majf=0, minf=9 00:35:13.780 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 issued rwts: total=4160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.780 filename2: (groupid=0, jobs=1): err= 0: pid=1514126: Fri Nov 15 11:53:13 2024 00:35:13.780 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10011msec) 00:35:13.780 slat (usec): min=5, max=111, avg=41.32, stdev=19.67 00:35:13.780 clat (usec): min=34301, max=45031, avg=38491.92, stdev=680.98 00:35:13.780 lat (usec): min=34319, max=45063, avg=38533.24, stdev=676.70 00:35:13.780 clat percentiles (usec): 00:35:13.780 | 1.00th=[37487], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.780 | 30.00th=[38536], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.780 | 70.00th=[38536], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:35:13.780 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:13.780 | 99.99th=[44827] 00:35:13.780 bw ( KiB/s): min= 1536, max= 1664, per=4.16%, avg=1650.11, stdev=40.23, samples=19 00:35:13.780 iops : min= 384, max= 416, avg=412.53, stdev=10.06, samples=19 00:35:13.780 lat (msec) : 50=100.00% 00:35:13.780 cpu : usr=98.63%, sys=1.00%, ctx=14, majf=0, minf=9 00:35:13.780 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.780 filename2: (groupid=0, jobs=1): err= 0: pid=1514127: Fri Nov 15 11:53:13 2024 00:35:13.780 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:35:13.780 slat (usec): min=9, max=112, avg=52.22, stdev=17.31 00:35:13.780 clat (usec): min=19488, max=66021, avg=38374.70, stdev=1855.13 00:35:13.780 lat (usec): min=19534, max=66036, avg=38426.92, stdev=1852.94 00:35:13.780 clat percentiles (usec): 00:35:13.780 | 1.00th=[37487], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:35:13.780 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38536], 60.00th=[38536], 00:35:13.780 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.780 | 99.00th=[42206], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:35:13.780 | 99.99th=[65799] 00:35:13.780 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.40, stdev=46.74, samples=20 00:35:13.780 iops : min= 384, max= 416, avg=411.10, stdev=11.68, samples=20 00:35:13.780 lat (msec) : 20=0.24%, 50=99.37%, 100=0.39% 00:35:13.780 cpu : usr=97.36%, sys=1.84%, ctx=135, majf=0, minf=9 00:35:13.780 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:13.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.780 filename2: (groupid=0, jobs=1): err= 0: pid=1514128: Fri Nov 15 11:53:13 2024 
00:35:13.780 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:35:13.780 slat (usec): min=5, max=100, avg=46.14, stdev=16.63 00:35:13.780 clat (usec): min=19515, max=58186, avg=38431.62, stdev=1798.29 00:35:13.780 lat (usec): min=19564, max=58199, avg=38477.77, stdev=1796.20 00:35:13.780 clat percentiles (usec): 00:35:13.780 | 1.00th=[37487], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.780 | 30.00th=[38011], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.780 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.780 | 99.00th=[42206], 99.50th=[44827], 99.90th=[57934], 99.95th=[57934], 00:35:13.780 | 99.99th=[57934] 00:35:13.780 bw ( KiB/s): min= 1536, max= 1664, per=4.15%, avg=1644.40, stdev=46.74, samples=20 00:35:13.780 iops : min= 384, max= 416, avg=411.10, stdev=11.68, samples=20 00:35:13.780 lat (msec) : 20=0.27%, 50=99.35%, 100=0.39% 00:35:13.780 cpu : usr=98.62%, sys=0.86%, ctx=102, majf=0, minf=9 00:35:13.780 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.780 filename2: (groupid=0, jobs=1): err= 0: pid=1514129: Fri Nov 15 11:53:13 2024 00:35:13.780 read: IOPS=413, BW=1653KiB/s (1693kB/s)(16.2MiB/10025msec) 00:35:13.780 slat (nsec): min=8843, max=79591, avg=43287.60, stdev=17177.53 00:35:13.780 clat (usec): min=22851, max=45430, avg=38324.59, stdev=1258.67 00:35:13.780 lat (usec): min=22861, max=45469, avg=38367.88, stdev=1260.35 00:35:13.780 clat percentiles (usec): 00:35:13.780 | 1.00th=[32900], 5.00th=[38011], 10.00th=[38011], 20.00th=[38011], 00:35:13.780 | 30.00th=[38011], 40.00th=[38536], 50.00th=[38536], 60.00th=[38536], 00:35:13.780 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:35:13.780 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:35:13.780 | 99.99th=[45351] 00:35:13.780 bw ( KiB/s): min= 1536, max= 1664, per=4.17%, avg=1651.00, stdev=39.34, samples=20 00:35:13.780 iops : min= 384, max= 416, avg=412.75, stdev= 9.83, samples=20 00:35:13.780 lat (msec) : 50=100.00% 00:35:13.780 cpu : usr=97.39%, sys=1.79%, ctx=99, majf=0, minf=9 00:35:13.780 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:13.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.780 filename2: (groupid=0, jobs=1): err= 0: pid=1514130: Fri Nov 15 11:53:13 2024 00:35:13.780 read: IOPS=419, BW=1677KiB/s (1717kB/s)(16.4MiB/10014msec) 00:35:13.780 slat (usec): min=9, max=133, avg=52.37, stdev=24.41 00:35:13.780 clat (usec): min=19290, max=93252, avg=37668.73, stdev=3933.66 00:35:13.780 lat (usec): min=19301, max=93268, avg=37721.11, stdev=3939.69 00:35:13.780 clat percentiles (usec): 00:35:13.780 | 1.00th=[23200], 5.00th=[31065], 10.00th=[37487], 20.00th=[38011], 00:35:13.780 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38536], 00:35:13.780 | 70.00th=[38536], 80.00th=[38536], 90.00th=[38536], 95.00th=[38536], 00:35:13.780 | 99.00th=[43254], 
99.50th=[64226], 99.90th=[66323], 99.95th=[66323], 00:35:13.780 | 99.99th=[92799] 00:35:13.780 bw ( KiB/s): min= 1536, max= 1920, per=4.22%, avg=1672.40, stdev=82.05, samples=20 00:35:13.780 iops : min= 384, max= 480, avg=418.10, stdev=20.51, samples=20 00:35:13.780 lat (msec) : 20=0.33%, 50=99.09%, 100=0.57% 00:35:13.780 cpu : usr=98.35%, sys=1.14%, ctx=34, majf=0, minf=9 00:35:13.780 IO depths : 1=5.8%, 2=11.6%, 4=23.8%, 8=52.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:13.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.780 issued rwts: total=4198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.780 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:13.780 00:35:13.780 Run status group 0 (all jobs): 00:35:13.780 READ: bw=38.7MiB/s (40.6MB/s), 1643KiB/s-1677KiB/s (1682kB/s-1717kB/s), io=388MiB (407MB), run=10010-10031msec 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:13.780 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 bdev_null0 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:13.781 11:53:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 [2024-11-15 11:53:13.535147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 bdev_null1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.781 { 00:35:13.781 "params": { 00:35:13.781 "name": "Nvme$subsystem", 00:35:13.781 "trtype": "$TEST_TRANSPORT", 00:35:13.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.781 "adrfam": "ipv4", 00:35:13.781 "trsvcid": "$NVMF_PORT", 00:35:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.781 "hdgst": ${hdgst:-false}, 00:35:13.781 "ddgst": ${ddgst:-false} 00:35:13.781 }, 00:35:13.781 "method": "bdev_nvme_attach_controller" 00:35:13.781 } 00:35:13.781 EOF 00:35:13.781 )") 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.781 { 00:35:13.781 "params": { 00:35:13.781 "name": "Nvme$subsystem", 00:35:13.781 "trtype": "$TEST_TRANSPORT", 00:35:13.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.781 "adrfam": "ipv4", 00:35:13.781 "trsvcid": "$NVMF_PORT", 00:35:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.781 "hdgst": ${hdgst:-false}, 00:35:13.781 "ddgst": ${ddgst:-false} 00:35:13.781 }, 00:35:13.781 "method": "bdev_nvme_attach_controller" 00:35:13.781 } 00:35:13.781 EOF 00:35:13.781 )") 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:13.781 "params": { 00:35:13.781 "name": "Nvme0", 00:35:13.781 "trtype": "tcp", 00:35:13.781 "traddr": "10.0.0.2", 00:35:13.781 "adrfam": "ipv4", 00:35:13.781 "trsvcid": "4420", 00:35:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.781 "hdgst": false, 00:35:13.781 "ddgst": false 00:35:13.781 }, 00:35:13.781 "method": "bdev_nvme_attach_controller" 00:35:13.781 },{ 00:35:13.781 "params": { 00:35:13.781 "name": "Nvme1", 00:35:13.781 "trtype": "tcp", 00:35:13.781 "traddr": "10.0.0.2", 00:35:13.781 "adrfam": "ipv4", 00:35:13.781 "trsvcid": "4420", 00:35:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.781 "hdgst": false, 00:35:13.781 "ddgst": false 00:35:13.781 }, 00:35:13.781 "method": "bdev_nvme_attach_controller" 00:35:13.781 }' 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:13.781 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:13.782 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:13.782 11:53:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.782 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:13.782 ... 00:35:13.782 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:13.782 ... 
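The two job-description lines above come from a job file that target/dif.sh generates on the fly (gen_fio_conf) and hands to fio on /dev/fd/61, while the JSON bdev config is passed separately via --spdk_json_conf as the command line in the trace shows. Below is a minimal sketch of a job file consistent with the parameters echoed earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1); it is a reconstruction, not the generated file itself, and the section/bdev names (Nvme0n1, Nvme1n1) are assumptions based on the Nvme0/Nvme1 controllers attached in that JSON config.

  cat > rand_params.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  # read,write,trim sizes -> the (R) 8KiB / (W) 16KiB / (T) 128KiB shown above
  bs=8k,16k,128k
  iodepth=8
  # two jobs per section, hence "Starting 4 threads" below
  numjobs=2
  time_based=1
  runtime=5

  # bdev names are assumptions; they follow the Nvme0/Nvme1 controller names above
  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF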
00:35:13.782 fio-3.35 00:35:13.782 Starting 4 threads 00:35:19.065 00:35:19.065 filename0: (groupid=0, jobs=1): err= 0: pid=1516309: Fri Nov 15 11:53:19 2024 00:35:19.065 read: IOPS=1859, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:35:19.065 slat (nsec): min=2648, max=65180, avg=11279.35, stdev=7345.90 00:35:19.065 clat (usec): min=1310, max=9591, avg=4264.51, stdev=691.84 00:35:19.065 lat (usec): min=1332, max=9600, avg=4275.79, stdev=691.67 00:35:19.065 clat percentiles (usec): 00:35:19.065 | 1.00th=[ 2573], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3785], 00:35:19.065 | 30.00th=[ 3916], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4490], 00:35:19.065 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5407], 00:35:19.065 | 99.00th=[ 6587], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 8586], 00:35:19.065 | 99.99th=[ 9634] 00:35:19.065 bw ( KiB/s): min=13872, max=16368, per=26.95%, avg=14917.33, stdev=777.28, samples=9 00:35:19.065 iops : min= 1734, max= 2046, avg=1864.67, stdev=97.16, samples=9 00:35:19.065 lat (msec) : 2=0.15%, 4=34.34%, 10=65.51% 00:35:19.065 cpu : usr=97.56%, sys=2.10%, ctx=13, majf=0, minf=9 00:35:19.065 IO depths : 1=0.3%, 2=12.6%, 4=59.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.065 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.065 issued rwts: total=9302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.065 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.065 filename0: (groupid=0, jobs=1): err= 0: pid=1516310: Fri Nov 15 11:53:19 2024 00:35:19.065 read: IOPS=1788, BW=14.0MiB/s (14.7MB/s)(69.9MiB/5002msec) 00:35:19.065 slat (nsec): min=2695, max=63658, avg=11584.94, stdev=7705.35 00:35:19.065 clat (usec): min=968, max=8322, avg=4434.79, stdev=728.35 00:35:19.065 lat (usec): min=989, max=8335, avg=4446.38, stdev=728.42 00:35:19.065 clat percentiles (usec): 00:35:19.066 | 1.00th=[ 2769], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3884], 00:35:19.066 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4490], 60.00th=[ 4555], 00:35:19.066 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 5669], 00:35:19.066 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 8029], 99.95th=[ 8160], 00:35:19.066 | 99.99th=[ 8291] 00:35:19.066 bw ( KiB/s): min=13616, max=15472, per=25.88%, avg=14324.78, stdev=724.10, samples=9 00:35:19.066 iops : min= 1702, max= 1934, avg=1790.56, stdev=90.56, samples=9 00:35:19.066 lat (usec) : 1000=0.01% 00:35:19.066 lat (msec) : 2=0.04%, 4=25.17%, 10=74.78% 00:35:19.066 cpu : usr=97.32%, sys=2.32%, ctx=8, majf=0, minf=9 00:35:19.066 IO depths : 1=0.2%, 2=11.6%, 4=59.5%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.066 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.066 issued rwts: total=8948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.066 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.066 filename1: (groupid=0, jobs=1): err= 0: pid=1516311: Fri Nov 15 11:53:19 2024 00:35:19.066 read: IOPS=1634, BW=12.8MiB/s (13.4MB/s)(63.8MiB/5001msec) 00:35:19.066 slat (nsec): min=9166, max=76751, avg=17355.56, stdev=10200.90 00:35:19.066 clat (usec): min=920, max=8442, avg=4840.01, stdev=709.25 00:35:19.066 lat (usec): min=935, max=8452, avg=4857.36, stdev=708.89 00:35:19.066 clat percentiles (usec): 00:35:19.066 | 1.00th=[ 3326], 5.00th=[ 4047], 10.00th=[ 4228], 20.00th=[ 
4490], 00:35:19.066 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:35:19.066 | 70.00th=[ 4948], 80.00th=[ 5276], 90.00th=[ 5669], 95.00th=[ 6325], 00:35:19.066 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 8225], 00:35:19.066 | 99.99th=[ 8455] 00:35:19.066 bw ( KiB/s): min=12464, max=13616, per=23.52%, avg=13020.44, stdev=393.29, samples=9 00:35:19.066 iops : min= 1558, max= 1702, avg=1627.56, stdev=49.16, samples=9 00:35:19.066 lat (usec) : 1000=0.02% 00:35:19.066 lat (msec) : 2=0.28%, 4=4.16%, 10=95.53% 00:35:19.066 cpu : usr=97.42%, sys=2.24%, ctx=7, majf=0, minf=9 00:35:19.066 IO depths : 1=0.1%, 2=7.3%, 4=64.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.066 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.066 issued rwts: total=8172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.066 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.066 filename1: (groupid=0, jobs=1): err= 0: pid=1516312: Fri Nov 15 11:53:19 2024 00:35:19.066 read: IOPS=1636, BW=12.8MiB/s (13.4MB/s)(64.0MiB/5002msec) 00:35:19.066 slat (nsec): min=3489, max=75304, avg=12709.74, stdev=9794.62 00:35:19.066 clat (usec): min=891, max=8585, avg=4845.01, stdev=798.05 00:35:19.066 lat (usec): min=909, max=8609, avg=4857.72, stdev=796.71 00:35:19.066 clat percentiles (usec): 00:35:19.066 | 1.00th=[ 3195], 5.00th=[ 3884], 10.00th=[ 4178], 20.00th=[ 4490], 00:35:19.066 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:35:19.066 | 70.00th=[ 4883], 80.00th=[ 5276], 90.00th=[ 5735], 95.00th=[ 6521], 00:35:19.066 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[ 8356], 99.95th=[ 8455], 00:35:19.066 | 99.99th=[ 8586] 00:35:19.066 bw ( KiB/s): min=12352, max=14064, per=23.64%, avg=13084.44, stdev=554.19, samples=9 00:35:19.066 iops : min= 1544, max= 1758, avg=1635.56, stdev=69.27, samples=9 00:35:19.066 lat (usec) : 1000=0.01% 00:35:19.066 lat (msec) : 2=0.20%, 4=6.17%, 10=93.62% 00:35:19.066 cpu : usr=96.90%, sys=2.46%, ctx=75, majf=0, minf=9 00:35:19.066 IO depths : 1=0.1%, 2=9.0%, 4=64.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.066 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.066 issued rwts: total=8187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.066 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.066 00:35:19.066 Run status group 0 (all jobs): 00:35:19.066 READ: bw=54.1MiB/s (56.7MB/s), 12.8MiB/s-14.5MiB/s (13.4MB/s-15.2MB/s), io=270MiB (284MB), run=5001-5002msec 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 11:53:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.326 00:35:19.326 real 0m24.743s 00:35:19.326 user 5m8.637s 00:35:19.326 sys 0m5.174s 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:19.326 11:53:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 ************************************ 00:35:19.326 END TEST fio_dif_rand_params 00:35:19.326 ************************************ 00:35:19.326 11:53:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:19.326 11:53:20 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:19.326 11:53:20 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:19.326 11:53:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.326 ************************************ 00:35:19.326 START TEST fio_dif_digest 00:35:19.326 ************************************ 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:19.326 11:53:20 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:19.326 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 bdev_null0 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 [2024-11-15 11:53:20.162391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:19.327 { 00:35:19.327 "params": { 00:35:19.327 "name": "Nvme$subsystem", 00:35:19.327 "trtype": "$TEST_TRANSPORT", 
00:35:19.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.327 "adrfam": "ipv4", 00:35:19.327 "trsvcid": "$NVMF_PORT", 00:35:19.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.327 "hdgst": ${hdgst:-false}, 00:35:19.327 "ddgst": ${ddgst:-false} 00:35:19.327 }, 00:35:19.327 "method": "bdev_nvme_attach_controller" 00:35:19.327 } 00:35:19.327 EOF 00:35:19.327 )") 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:35:19.327 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:19.586 "params": { 00:35:19.586 "name": "Nvme0", 00:35:19.586 "trtype": "tcp", 00:35:19.586 "traddr": "10.0.0.2", 00:35:19.586 "adrfam": "ipv4", 00:35:19.586 "trsvcid": "4420", 00:35:19.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.586 "hdgst": true, 00:35:19.586 "ddgst": true 00:35:19.586 }, 00:35:19.586 "method": "bdev_nvme_attach_controller" 00:35:19.586 }' 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:19.586 11:53:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.859 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:19.859 ... 
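As with the earlier run, the job description that follows comes from a generated file. A sketch consistent with the digest-test parameters set above (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10) is shown below; the bdev name is again an assumption, and note that the header/data digests are enabled on the controller side by the "hdgst": true / "ddgst": true fields in the JSON just printed, not by anything in the fio file.

  cat > dif_digest.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  # three jobs on one section, hence "Starting 3 threads" below
  numjobs=3
  time_based=1
  runtime=10

  # bdev name is an assumption; it follows the Nvme0 controller attached above
  [filename0]
  filename=Nvme0n1
  EOF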
00:35:19.859 fio-3.35 00:35:19.859 Starting 3 threads 00:35:32.074 00:35:32.074 filename0: (groupid=0, jobs=1): err= 0: pid=1517551: Fri Nov 15 11:53:31 2024 00:35:32.074 read: IOPS=163, BW=20.4MiB/s (21.4MB/s)(205MiB/10047msec) 00:35:32.074 slat (nsec): min=5855, max=42406, avg=12425.28, stdev=5058.50 00:35:32.074 clat (usec): min=14334, max=54931, avg=18336.08, stdev=1639.96 00:35:32.074 lat (usec): min=14345, max=54942, avg=18348.51, stdev=1639.30 00:35:32.074 clat percentiles (usec): 00:35:32.074 | 1.00th=[15795], 5.00th=[16450], 10.00th=[16909], 20.00th=[17171], 00:35:32.074 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:35:32.074 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:35:32.074 | 99.00th=[21365], 99.50th=[21365], 99.90th=[47973], 99.95th=[54789], 00:35:32.074 | 99.99th=[54789] 00:35:32.074 bw ( KiB/s): min=20224, max=21760, per=30.23%, avg=20966.40, stdev=531.18, samples=20 00:35:32.074 iops : min= 158, max= 170, avg=163.80, stdev= 4.15, samples=20 00:35:32.074 lat (msec) : 20=92.13%, 50=7.80%, 100=0.06% 00:35:32.074 cpu : usr=96.34%, sys=3.35%, ctx=24, majf=0, minf=11 00:35:32.074 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.074 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.074 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.074 filename0: (groupid=0, jobs=1): err= 0: pid=1517552: Fri Nov 15 11:53:31 2024 00:35:32.074 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(240MiB/10006msec) 00:35:32.074 slat (nsec): min=7191, max=39077, avg=14092.49, stdev=5848.26 00:35:32.074 clat (usec): min=7685, max=20414, avg=15617.34, stdev=1066.13 00:35:32.074 lat (usec): min=7697, max=20442, avg=15631.43, stdev=1069.21 00:35:32.074 clat percentiles (usec): 00:35:32.074 | 1.00th=[13435], 5.00th=[14091], 10.00th=[14353], 20.00th=[14746], 00:35:32.074 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:35:32.074 | 70.00th=[16057], 80.00th=[16319], 90.00th=[17171], 95.00th=[17695], 00:35:32.074 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[20317], 00:35:32.074 | 99.99th=[20317] 00:35:32.074 bw ( KiB/s): min=22272, max=25856, per=35.38%, avg=24535.58, stdev=1110.20, samples=19 00:35:32.074 iops : min= 174, max= 202, avg=191.68, stdev= 8.67, samples=19 00:35:32.074 lat (msec) : 10=0.05%, 20=99.90%, 50=0.05% 00:35:32.074 cpu : usr=93.12%, sys=4.97%, ctx=360, majf=0, minf=9 00:35:32.074 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.074 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.074 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.074 filename0: (groupid=0, jobs=1): err= 0: pid=1517553: Fri Nov 15 11:53:31 2024 00:35:32.074 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(235MiB/10011msec) 00:35:32.074 slat (nsec): min=2779, max=51772, avg=11829.16, stdev=4832.28 00:35:32.074 clat (usec): min=12442, max=20182, avg=15933.54, stdev=1153.76 00:35:32.074 lat (usec): min=12455, max=20192, avg=15945.37, stdev=1152.55 00:35:32.074 clat percentiles (usec): 00:35:32.074 | 1.00th=[13304], 5.00th=[14091], 10.00th=[14484], 20.00th=[15008], 
00:35:32.074 | 30.00th=[15270], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:35:32.074 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:35:32.074 | 99.00th=[18744], 99.50th=[19268], 99.90th=[19792], 99.95th=[20055], 00:35:32.074 | 99.99th=[20055] 00:35:32.074 bw ( KiB/s): min=22784, max=26624, per=34.72%, avg=24074.35, stdev=789.31, samples=20 00:35:32.074 iops : min= 178, max= 208, avg=188.05, stdev= 6.16, samples=20 00:35:32.074 lat (msec) : 20=99.95%, 50=0.05% 00:35:32.074 cpu : usr=96.16%, sys=3.54%, ctx=27, majf=0, minf=12 00:35:32.074 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.074 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.074 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.074 00:35:32.074 Run status group 0 (all jobs): 00:35:32.074 READ: bw=67.7MiB/s (71.0MB/s), 20.4MiB/s-24.0MiB/s (21.4MB/s-25.1MB/s), io=680MiB (713MB), run=10006-10047msec 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.074 00:35:32.074 real 0m11.408s 00:35:32.074 user 0m39.371s 00:35:32.074 sys 0m1.573s 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:32.074 11:53:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.074 ************************************ 00:35:32.074 END TEST fio_dif_digest 00:35:32.074 ************************************ 00:35:32.074 11:53:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:32.074 11:53:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:32.074 rmmod nvme_tcp 00:35:32.074 rmmod nvme_fabrics 00:35:32.074 rmmod nvme_keyring 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1508218 ']' 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1508218 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 1508218 ']' 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 1508218 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1508218 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1508218' 00:35:32.074 killing process with pid 1508218 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@971 -- # kill 1508218 00:35:32.074 11:53:31 nvmf_dif -- common/autotest_common.sh@976 -- # wait 1508218 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:32.074 11:53:31 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:33.983 Waiting for block devices as requested 00:35:33.983 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:35:33.983 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:33.983 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:33.983 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:34.242 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:34.242 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:34.242 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:34.242 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:34.502 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:34.502 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:34.502 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:34.761 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:34.761 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:34.761 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:35.020 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:35.020 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:35.020 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:35.020 11:53:35 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:35.020 11:53:35 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:35.020 11:53:35 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:35.020 11:53:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:35.020 11:53:35 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:35.020 11:53:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:35.280 11:53:35 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.280 11:53:35 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.280 11:53:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.280 11:53:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:35.280 11:53:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.187 11:53:37 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.187 00:35:37.187 real 
1m14.295s 00:35:37.187 user 7m41.113s 00:35:37.187 sys 0m19.951s 00:35:37.187 11:53:37 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.187 11:53:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:37.187 ************************************ 00:35:37.187 END TEST nvmf_dif 00:35:37.187 ************************************ 00:35:37.187 11:53:37 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:37.187 11:53:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:37.187 11:53:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:37.187 11:53:37 -- common/autotest_common.sh@10 -- # set +x 00:35:37.187 ************************************ 00:35:37.187 START TEST nvmf_abort_qd_sizes 00:35:37.187 ************************************ 00:35:37.187 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:37.447 * Looking for test storage... 00:35:37.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.447 --rc genhtml_branch_coverage=1 00:35:37.447 --rc genhtml_function_coverage=1 00:35:37.447 --rc genhtml_legend=1 00:35:37.447 --rc geninfo_all_blocks=1 00:35:37.447 --rc geninfo_unexecuted_blocks=1 00:35:37.447 00:35:37.447 ' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.447 --rc genhtml_branch_coverage=1 00:35:37.447 --rc genhtml_function_coverage=1 00:35:37.447 --rc genhtml_legend=1 00:35:37.447 --rc geninfo_all_blocks=1 00:35:37.447 --rc geninfo_unexecuted_blocks=1 00:35:37.447 00:35:37.447 ' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.447 --rc genhtml_branch_coverage=1 00:35:37.447 --rc genhtml_function_coverage=1 00:35:37.447 --rc genhtml_legend=1 00:35:37.447 --rc geninfo_all_blocks=1 00:35:37.447 --rc geninfo_unexecuted_blocks=1 00:35:37.447 00:35:37.447 ' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.447 --rc genhtml_branch_coverage=1 00:35:37.447 --rc genhtml_function_coverage=1 00:35:37.447 --rc genhtml_legend=1 00:35:37.447 --rc geninfo_all_blocks=1 00:35:37.447 --rc geninfo_unexecuted_blocks=1 00:35:37.447 00:35:37.447 ' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.447 11:53:38 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:37.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.448 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:42.725 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:42.725 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.725 11:53:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:42.726 Found net devices under 0000:af:00.0: cvl_0_0 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:42.726 Found net devices under 0000:af:00.1: cvl_0_1 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:42.726 11:53:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:42.726 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:42.985 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:42.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:42.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:35:42.985 00:35:42.985 --- 10.0.0.2 ping statistics --- 00:35:42.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.986 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:35:42.986 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:42.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:42.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:35:42.986 00:35:42.986 --- 10.0.0.1 ping statistics --- 00:35:42.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.986 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:35:42.986 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:42.986 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:42.986 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:42.986 11:53:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:45.522 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:45.522 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:45.781 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:45.781 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:45.781 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:45.781 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:45.781 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:45.781 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:46.720 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1525652 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1525652 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 1525652 ']' 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:46.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:46.720 11:53:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:46.720 [2024-11-15 11:53:47.503912] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:35:46.720 [2024-11-15 11:53:47.503969] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.979 [2024-11-15 11:53:47.601574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.979 [2024-11-15 11:53:47.656146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.979 [2024-11-15 11:53:47.656190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.979 [2024-11-15 11:53:47.656201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.979 [2024-11-15 11:53:47.656210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.979 [2024-11-15 11:53:47.656220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.979 [2024-11-15 11:53:47.658264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.979 [2024-11-15 11:53:47.658304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.979 [2024-11-15 11:53:47.658321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.979 [2024-11-15 11:53:47.658328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:86:00.0 ]] 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:47.916 
11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:86:00.0 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:47.916 11:53:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:47.916 ************************************ 00:35:47.916 START TEST spdk_target_abort 00:35:47.916 ************************************ 00:35:47.916 11:53:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:35:47.916 11:53:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:47.916 11:53:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:35:47.916 11:53:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.916 11:53:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.202 spdk_targetn1 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.202 [2024-11-15 11:53:51.375101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.202 [2024-11-15 11:53:51.427468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.202 11:53:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.492 Initializing NVMe Controllers 00:35:54.492 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:54.492 Initialization complete. Launching workers. 00:35:54.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13914, failed: 0 00:35:54.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1309, failed to submit 12605 00:35:54.492 success 721, unsuccessful 588, failed 0 00:35:54.492 11:53:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:54.492 11:53:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.781 Initializing NVMe Controllers 00:35:57.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:57.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:57.781 Initialization complete. Launching workers. 00:35:57.781 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8606, failed: 0 00:35:57.781 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7377 00:35:57.781 success 319, unsuccessful 910, failed 0 00:35:57.781 11:53:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.781 11:53:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.318 Initializing NVMe Controllers 00:36:00.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.318 Initialization complete. Launching workers. 
00:36:00.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38176, failed: 0 00:36:00.319 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2650, failed to submit 35526 00:36:00.319 success 579, unsuccessful 2071, failed 0 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.319 11:54:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.704 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.704 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1525652 00:36:01.704 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 1525652 ']' 00:36:01.704 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 1525652 00:36:01.705 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:36:01.705 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:01.705 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1525652 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1525652' 00:36:01.964 killing process with pid 1525652 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 1525652 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 1525652 00:36:01.964 00:36:01.964 real 0m14.242s 00:36:01.964 user 0m57.157s 00:36:01.964 sys 0m2.410s 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:01.964 11:54:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.964 ************************************ 00:36:01.964 END TEST spdk_target_abort 00:36:01.964 ************************************ 00:36:01.964 11:54:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:01.964 11:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:01.964 11:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:01.964 11:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:02.223 ************************************ 00:36:02.223 START TEST kernel_target_abort 00:36:02.223 
************************************ 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:02.223 11:54:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:04.757 Waiting for block devices as requested 00:36:04.757 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:36:05.015 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:05.015 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:05.015 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:05.274 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:05.274 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:05.274 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:05.274 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:05.533 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:05.533 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:05.533 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:05.793 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:05.793 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:05.793 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:05.793 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:06.053 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:06.053 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:06.053 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:06.313 No valid GPT data, bailing 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:06.313 11:54:06 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:06.313 11:54:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:06.313 00:36:06.313 Discovery Log Number of Records 2, Generation counter 2 00:36:06.313 =====Discovery Log Entry 0====== 00:36:06.313 trtype: tcp 00:36:06.313 adrfam: ipv4 00:36:06.313 subtype: current discovery subsystem 00:36:06.313 treq: not specified, sq flow control disable supported 00:36:06.313 portid: 1 00:36:06.313 trsvcid: 4420 00:36:06.313 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:06.313 traddr: 10.0.0.1 00:36:06.313 eflags: none 00:36:06.313 sectype: none 00:36:06.313 =====Discovery Log Entry 1====== 00:36:06.313 trtype: tcp 00:36:06.313 adrfam: ipv4 00:36:06.313 subtype: nvme subsystem 00:36:06.313 treq: not specified, sq flow control disable supported 00:36:06.313 portid: 1 00:36:06.313 trsvcid: 4420 00:36:06.313 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:06.313 traddr: 10.0.0.1 00:36:06.313 eflags: none 00:36:06.313 sectype: none 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:06.313 11:54:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:06.313 11:54:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.603 Initializing NVMe Controllers 00:36:09.603 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:09.603 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:09.603 Initialization complete. Launching workers. 00:36:09.603 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48854, failed: 0 00:36:09.603 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48854, failed to submit 0 00:36:09.603 success 0, unsuccessful 48854, failed 0 00:36:09.603 11:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:09.603 11:54:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:12.891 Initializing NVMe Controllers 00:36:12.891 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:12.891 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:12.891 Initialization complete. Launching workers. 
00:36:12.891 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84446, failed: 0 00:36:12.891 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19262, failed to submit 65184 00:36:12.891 success 0, unsuccessful 19262, failed 0 00:36:12.891 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.891 11:54:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:16.179 Initializing NVMe Controllers 00:36:16.179 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:16.179 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:16.179 Initialization complete. Launching workers. 00:36:16.180 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78332, failed: 0 00:36:16.180 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19558, failed to submit 58774 00:36:16.180 success 0, unsuccessful 19558, failed 0 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:16.180 11:54:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:18.712 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:18.712 0000:80:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:36:18.712 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:19.279 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:36:19.537 00:36:19.537 real 0m17.412s 00:36:19.537 user 0m8.368s 00:36:19.537 sys 0m5.169s 00:36:19.537 11:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:19.537 11:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.537 ************************************ 00:36:19.537 END TEST kernel_target_abort 00:36:19.537 ************************************ 00:36:19.537 11:54:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:19.537 11:54:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:19.537 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.537 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.538 rmmod nvme_tcp 00:36:19.538 rmmod nvme_fabrics 00:36:19.538 rmmod nvme_keyring 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1525652 ']' 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1525652 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 1525652 ']' 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 1525652 00:36:19.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1525652) - No such process 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 1525652 is not found' 00:36:19.538 Process with pid 1525652 is not found 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:19.538 11:54:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:22.070 Waiting for block devices as requested 00:36:22.330 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:36:22.330 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:22.589 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:22.589 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:22.589 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:22.847 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:22.847 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:22.847 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:22.847 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:23.106 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:23.106 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:23.106 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:23.365 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:23.366 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:23.366 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:23.366 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:23.624 0000:80:04.0 
(8086 2021): vfio-pci -> ioatdma 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.624 11:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.159 11:54:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:26.159 00:36:26.159 real 0m48.425s 00:36:26.159 user 1m9.744s 00:36:26.159 sys 0m15.807s 00:36:26.159 11:54:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:26.159 11:54:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:26.159 ************************************ 00:36:26.159 END TEST nvmf_abort_qd_sizes 00:36:26.159 ************************************ 00:36:26.159 11:54:26 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:26.159 11:54:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:26.159 11:54:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:26.159 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:36:26.159 ************************************ 00:36:26.159 START TEST keyring_file 00:36:26.159 ************************************ 00:36:26.159 11:54:26 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:26.159 * Looking for test storage... 
00:36:26.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:26.159 11:54:26 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:26.159 11:54:26 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:26.159 11:54:26 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:36:26.159 11:54:26 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:26.159 11:54:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:26.159 11:54:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:26.160 11:54:26 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:26.160 11:54:26 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.160 --rc genhtml_branch_coverage=1 00:36:26.160 --rc genhtml_function_coverage=1 00:36:26.160 --rc genhtml_legend=1 00:36:26.160 --rc geninfo_all_blocks=1 00:36:26.160 --rc geninfo_unexecuted_blocks=1 00:36:26.160 00:36:26.160 ' 00:36:26.160 11:54:26 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.160 --rc genhtml_branch_coverage=1 00:36:26.160 --rc genhtml_function_coverage=1 00:36:26.160 --rc genhtml_legend=1 00:36:26.160 --rc geninfo_all_blocks=1 
00:36:26.160 --rc geninfo_unexecuted_blocks=1 00:36:26.160 00:36:26.160 ' 00:36:26.160 11:54:26 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.160 --rc genhtml_branch_coverage=1 00:36:26.160 --rc genhtml_function_coverage=1 00:36:26.160 --rc genhtml_legend=1 00:36:26.160 --rc geninfo_all_blocks=1 00:36:26.160 --rc geninfo_unexecuted_blocks=1 00:36:26.160 00:36:26.160 ' 00:36:26.160 11:54:26 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:26.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.160 --rc genhtml_branch_coverage=1 00:36:26.160 --rc genhtml_function_coverage=1 00:36:26.160 --rc genhtml_legend=1 00:36:26.160 --rc geninfo_all_blocks=1 00:36:26.160 --rc geninfo_unexecuted_blocks=1 00:36:26.160 00:36:26.160 ' 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.160 11:54:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.160 11:54:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.160 11:54:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.160 11:54:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.160 11:54:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:26.160 11:54:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:26.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
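The prep_key trace that continues below creates each TLS PSK as a temporary file restricted to mode 0600 (the interchange-format string itself is produced by the format_interchange_psk helper from nvmf/common.sh) and later hands the file to the bdevperf instance over its RPC socket at /var/tmp/bperf.sock. A condensed sketch of that flow, using only RPCs that appear later in this log, follows; SPDK_DIR is a placeholder and the exact PSK encoding is deliberately left to the helper.

    # Sketch of the per-key setup used by keyring/file.sh in this run.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # SPDK_DIR is a placeholder

    key0path=$(mktemp)                         # e.g. /tmp/tmp.ynscIoA1lc below
    # format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"                     # the keyring rejects group/other-readable files

    $rpc keyring_file_add_key key0 "$key0path"
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key0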
00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ynscIoA1lc 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ynscIoA1lc 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ynscIoA1lc 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ynscIoA1lc 00:36:26.160 11:54:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VuTGaTQZoX 00:36:26.160 11:54:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:26.160 11:54:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:26.161 11:54:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:26.161 11:54:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:26.161 11:54:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:26.161 11:54:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:26.161 11:54:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:26.161 11:54:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VuTGaTQZoX 00:36:26.161 11:54:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VuTGaTQZoX 00:36:26.161 11:54:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VuTGaTQZoX 00:36:26.161 11:54:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=1535556 00:36:26.161 11:54:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1535556 00:36:26.161 11:54:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:26.161 11:54:26 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1535556 ']' 00:36:26.161 11:54:26 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.161 11:54:26 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:26.161 11:54:26 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.161 11:54:26 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:26.161 11:54:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.161 [2024-11-15 11:54:26.926353] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:36:26.161 [2024-11-15 11:54:26.926416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535556 ] 00:36:26.420 [2024-11-15 11:54:27.018394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.420 [2024-11-15 11:54:27.068433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:26.679 11:54:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.679 [2024-11-15 11:54:27.303491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:26.679 null0 00:36:26.679 [2024-11-15 11:54:27.335531] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:26.679 [2024-11-15 11:54:27.335982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.679 11:54:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.679 [2024-11-15 11:54:27.363591] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:26.679 request: 00:36:26.679 { 00:36:26.679 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:26.679 "secure_channel": false, 00:36:26.679 "listen_address": { 00:36:26.679 "trtype": "tcp", 00:36:26.679 "traddr": "127.0.0.1", 00:36:26.679 "trsvcid": "4420" 00:36:26.679 }, 00:36:26.679 "method": "nvmf_subsystem_add_listener", 00:36:26.679 "req_id": 1 00:36:26.679 } 00:36:26.679 Got JSON-RPC error response 00:36:26.679 response: 00:36:26.679 { 00:36:26.679 
"code": -32602, 00:36:26.679 "message": "Invalid parameters" 00:36:26.679 } 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:26.679 11:54:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=1535566 00:36:26.679 11:54:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1535566 /var/tmp/bperf.sock 00:36:26.679 11:54:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1535566 ']' 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:26.679 11:54:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.679 [2024-11-15 11:54:27.423267] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:36:26.679 [2024-11-15 11:54:27.423326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535566 ] 00:36:26.679 [2024-11-15 11:54:27.490649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.938 [2024-11-15 11:54:27.531202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.938 11:54:27 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:26.938 11:54:27 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:26.938 11:54:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:26.938 11:54:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:27.197 11:54:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VuTGaTQZoX 00:36:27.197 11:54:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VuTGaTQZoX 00:36:27.456 11:54:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:27.456 11:54:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:27.456 11:54:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.456 11:54:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.456 11:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:36:27.715 11:54:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ynscIoA1lc == \/\t\m\p\/\t\m\p\.\y\n\s\c\I\o\A\1\l\c ]] 00:36:27.715 11:54:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:27.715 11:54:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:27.716 11:54:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.716 11:54:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:27.716 11:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.974 11:54:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.VuTGaTQZoX == \/\t\m\p\/\t\m\p\.\V\u\T\G\a\T\Q\Z\o\X ]] 00:36:27.974 11:54:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:27.974 11:54:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:27.974 11:54:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.974 11:54:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.974 11:54:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.974 11:54:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.233 11:54:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:28.233 11:54:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:28.233 11:54:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:28.233 11:54:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.233 11:54:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:28.233 11:54:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.233 11:54:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.492 11:54:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:28.492 11:54:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.492 11:54:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.752 [2024-11-15 11:54:29.538186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:28.752 nvme0n1 00:36:29.012 11:54:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:29.012 11:54:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.012 11:54:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.012 11:54:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.012 11:54:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.012 11:54:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.271 11:54:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:29.271 11:54:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:29.271 11:54:29 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.271 11:54:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:29.271 11:54:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.271 11:54:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:29.271 11:54:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.271 11:54:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:29.271 11:54:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:29.576 Running I/O for 1 seconds... 00:36:30.704 13498.00 IOPS, 52.73 MiB/s 00:36:30.704 Latency(us) 00:36:30.704 [2024-11-15T10:54:31.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.704 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:30.704 nvme0n1 : 1.01 13501.77 52.74 0.00 0.00 9435.14 6583.39 15192.44 00:36:30.704 [2024-11-15T10:54:31.557Z] =================================================================================================================== 00:36:30.704 [2024-11-15T10:54:31.557Z] Total : 13501.77 52.74 0.00 0.00 9435.14 6583.39 15192.44 00:36:30.704 { 00:36:30.704 "results": [ 00:36:30.704 { 00:36:30.704 "job": "nvme0n1", 00:36:30.704 "core_mask": "0x2", 00:36:30.704 "workload": "randrw", 00:36:30.704 "percentage": 50, 00:36:30.704 "status": "finished", 00:36:30.704 "queue_depth": 128, 00:36:30.704 "io_size": 4096, 00:36:30.704 "runtime": 1.009349, 00:36:30.704 "iops": 13501.771934187283, 00:36:30.704 "mibps": 52.741296617919076, 00:36:30.704 "io_failed": 0, 00:36:30.704 "io_timeout": 0, 00:36:30.704 "avg_latency_us": 9435.139333457853, 00:36:30.704 "min_latency_us": 6583.389090909091, 00:36:30.704 "max_latency_us": 15192.436363636363 00:36:30.704 } 00:36:30.704 ], 00:36:30.704 "core_count": 1 00:36:30.704 } 00:36:30.704 11:54:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:30.705 11:54:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:30.705 11:54:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:30.705 11:54:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:30.705 11:54:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.705 11:54:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.705 11:54:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:30.705 11:54:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.963 11:54:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:30.963 11:54:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:30.963 11:54:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:30.963 11:54:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.963 11:54:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.963 11:54:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:30.963 11:54:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.223 11:54:32 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:31.223 11:54:32 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:31.223 11:54:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.223 11:54:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.481 [2024-11-15 11:54:32.331390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:31.481 [2024-11-15 11:54:32.332254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e9200 (107): Transport endpoint is not connected 00:36:31.740 [2024-11-15 11:54:32.333249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e9200 (9): Bad file descriptor 00:36:31.740 [2024-11-15 11:54:32.334251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:31.740 [2024-11-15 11:54:32.334259] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:31.740 [2024-11-15 11:54:32.334266] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:31.740 [2024-11-15 11:54:32.334274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
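The ERROR lines just above and the JSON request/response dump that follows record the attach attempt made with key1, which the NOT wrapper expects to fail; the suite then re-reads keyring_get_keys to confirm that neither key picked up an extra reference. A one-line equivalent of that refcount check, assuming the same bperf RPC socket used throughout this run (SPDK_DIR again being a placeholder for the checkout path), is:

    # How many consumers currently hold "key0" on the bdevperf keyring.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'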
00:36:31.740 request: 00:36:31.740 { 00:36:31.740 "name": "nvme0", 00:36:31.740 "trtype": "tcp", 00:36:31.740 "traddr": "127.0.0.1", 00:36:31.740 "adrfam": "ipv4", 00:36:31.740 "trsvcid": "4420", 00:36:31.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:31.740 "prchk_reftag": false, 00:36:31.740 "prchk_guard": false, 00:36:31.740 "hdgst": false, 00:36:31.740 "ddgst": false, 00:36:31.740 "psk": "key1", 00:36:31.740 "allow_unrecognized_csi": false, 00:36:31.740 "method": "bdev_nvme_attach_controller", 00:36:31.740 "req_id": 1 00:36:31.740 } 00:36:31.740 Got JSON-RPC error response 00:36:31.740 response: 00:36:31.740 { 00:36:31.740 "code": -5, 00:36:31.740 "message": "Input/output error" 00:36:31.740 } 00:36:31.740 11:54:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:31.740 11:54:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:31.740 11:54:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:31.740 11:54:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:31.740 11:54:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:31.740 11:54:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:31.740 11:54:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.740 11:54:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.740 11:54:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.740 11:54:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.999 11:54:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:31.999 11:54:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:31.999 11:54:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:31.999 11:54:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.999 11:54:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.999 11:54:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.999 11:54:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:32.258 11:54:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:32.258 11:54:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:32.258 11:54:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:32.517 11:54:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:32.517 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:32.775 11:54:33 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:32.775 11:54:33 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:32.775 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.034 11:54:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:33.034 11:54:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ynscIoA1lc 00:36:33.034 11:54:33 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.034 11:54:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:33.034 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:33.293 [2024-11-15 11:54:33.897318] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ynscIoA1lc': 0100660 00:36:33.293 [2024-11-15 11:54:33.897341] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:33.293 request: 00:36:33.293 { 00:36:33.293 "name": "key0", 00:36:33.293 "path": "/tmp/tmp.ynscIoA1lc", 00:36:33.293 "method": "keyring_file_add_key", 00:36:33.293 "req_id": 1 00:36:33.293 } 00:36:33.293 Got JSON-RPC error response 00:36:33.293 response: 00:36:33.293 { 00:36:33.293 "code": -1, 00:36:33.293 "message": "Operation not permitted" 00:36:33.293 } 00:36:33.293 11:54:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:33.293 11:54:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:33.293 11:54:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:33.293 11:54:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:33.293 11:54:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ynscIoA1lc 00:36:33.293 11:54:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:33.293 11:54:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ynscIoA1lc 00:36:33.552 11:54:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ynscIoA1lc 00:36:33.552 11:54:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:33.552 11:54:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:33.552 11:54:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.552 11:54:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.552 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.552 11:54:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:33.811 11:54:34 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:33.811 11:54:34 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.811 11:54:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.811 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.073 [2024-11-15 11:54:34.723478] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ynscIoA1lc': No such file or directory 00:36:34.073 [2024-11-15 11:54:34.723501] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:34.073 [2024-11-15 11:54:34.723515] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:34.073 [2024-11-15 11:54:34.723521] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:34.073 [2024-11-15 11:54:34.723546] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:34.073 [2024-11-15 11:54:34.723552] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:34.073 request: 00:36:34.073 { 00:36:34.073 "name": "nvme0", 00:36:34.073 "trtype": "tcp", 00:36:34.073 "traddr": "127.0.0.1", 00:36:34.073 "adrfam": "ipv4", 00:36:34.073 "trsvcid": "4420", 00:36:34.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:34.073 "prchk_reftag": false, 00:36:34.073 "prchk_guard": false, 00:36:34.073 "hdgst": false, 00:36:34.073 "ddgst": false, 00:36:34.073 "psk": "key0", 00:36:34.073 "allow_unrecognized_csi": false, 00:36:34.073 "method": "bdev_nvme_attach_controller", 00:36:34.073 "req_id": 1 00:36:34.073 } 00:36:34.073 Got JSON-RPC error response 00:36:34.073 response: 00:36:34.073 { 00:36:34.074 "code": -19, 00:36:34.074 "message": "No such device" 00:36:34.074 } 00:36:34.074 11:54:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:34.074 11:54:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:34.074 11:54:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:34.074 11:54:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:34.074 11:54:34 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:34.074 11:54:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:34.338 11:54:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VpP2whq9Yw 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:34.338 11:54:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:34.338 11:54:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:34.338 11:54:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:34.338 11:54:35 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:34.338 11:54:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:34.338 11:54:35 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VpP2whq9Yw 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VpP2whq9Yw 00:36:34.338 11:54:35 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.VpP2whq9Yw 00:36:34.338 11:54:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VpP2whq9Yw 00:36:34.338 11:54:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VpP2whq9Yw 00:36:34.597 11:54:35 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.597 11:54:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.856 nvme0n1 00:36:34.856 11:54:35 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:34.856 11:54:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:34.856 11:54:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:34.856 11:54:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.856 11:54:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.856 11:54:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.423 11:54:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:35.423 11:54:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:35.423 11:54:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:35.423 11:54:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:35.423 11:54:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:35.423 11:54:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.423 11:54:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.423 11:54:36 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.682 11:54:36 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:35.682 11:54:36 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:35.682 11:54:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.682 11:54:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.682 11:54:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.682 11:54:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.682 11:54:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.941 11:54:36 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:35.941 11:54:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:35.941 11:54:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:36.200 11:54:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:36.200 11:54:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:36.200 11:54:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.459 11:54:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:36.459 11:54:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VpP2whq9Yw 00:36:36.459 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VpP2whq9Yw 00:36:36.717 11:54:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VuTGaTQZoX 00:36:36.717 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VuTGaTQZoX 00:36:36.976 11:54:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:36.976 11:54:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.235 nvme0n1 00:36:37.235 11:54:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:37.235 11:54:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:37.802 11:54:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:37.802 "subsystems": [ 00:36:37.802 { 00:36:37.803 "subsystem": "keyring", 00:36:37.803 "config": [ 00:36:37.803 { 00:36:37.803 "method": "keyring_file_add_key", 00:36:37.803 "params": { 00:36:37.803 "name": "key0", 00:36:37.803 "path": "/tmp/tmp.VpP2whq9Yw" 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "keyring_file_add_key", 00:36:37.803 "params": { 00:36:37.803 "name": "key1", 00:36:37.803 "path": "/tmp/tmp.VuTGaTQZoX" 00:36:37.803 } 00:36:37.803 } 00:36:37.803 ] 00:36:37.803 
}, 00:36:37.803 { 00:36:37.803 "subsystem": "iobuf", 00:36:37.803 "config": [ 00:36:37.803 { 00:36:37.803 "method": "iobuf_set_options", 00:36:37.803 "params": { 00:36:37.803 "small_pool_count": 8192, 00:36:37.803 "large_pool_count": 1024, 00:36:37.803 "small_bufsize": 8192, 00:36:37.803 "large_bufsize": 135168, 00:36:37.803 "enable_numa": false 00:36:37.803 } 00:36:37.803 } 00:36:37.803 ] 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "subsystem": "sock", 00:36:37.803 "config": [ 00:36:37.803 { 00:36:37.803 "method": "sock_set_default_impl", 00:36:37.803 "params": { 00:36:37.803 "impl_name": "posix" 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "sock_impl_set_options", 00:36:37.803 "params": { 00:36:37.803 "impl_name": "ssl", 00:36:37.803 "recv_buf_size": 4096, 00:36:37.803 "send_buf_size": 4096, 00:36:37.803 "enable_recv_pipe": true, 00:36:37.803 "enable_quickack": false, 00:36:37.803 "enable_placement_id": 0, 00:36:37.803 "enable_zerocopy_send_server": true, 00:36:37.803 "enable_zerocopy_send_client": false, 00:36:37.803 "zerocopy_threshold": 0, 00:36:37.803 "tls_version": 0, 00:36:37.803 "enable_ktls": false 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "sock_impl_set_options", 00:36:37.803 "params": { 00:36:37.803 "impl_name": "posix", 00:36:37.803 "recv_buf_size": 2097152, 00:36:37.803 "send_buf_size": 2097152, 00:36:37.803 "enable_recv_pipe": true, 00:36:37.803 "enable_quickack": false, 00:36:37.803 "enable_placement_id": 0, 00:36:37.803 "enable_zerocopy_send_server": true, 00:36:37.803 "enable_zerocopy_send_client": false, 00:36:37.803 "zerocopy_threshold": 0, 00:36:37.803 "tls_version": 0, 00:36:37.803 "enable_ktls": false 00:36:37.803 } 00:36:37.803 } 00:36:37.803 ] 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "subsystem": "vmd", 00:36:37.803 "config": [] 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "subsystem": "accel", 00:36:37.803 "config": [ 00:36:37.803 { 00:36:37.803 "method": "accel_set_options", 00:36:37.803 "params": { 00:36:37.803 "small_cache_size": 128, 00:36:37.803 "large_cache_size": 16, 00:36:37.803 "task_count": 2048, 00:36:37.803 "sequence_count": 2048, 00:36:37.803 "buf_count": 2048 00:36:37.803 } 00:36:37.803 } 00:36:37.803 ] 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "subsystem": "bdev", 00:36:37.803 "config": [ 00:36:37.803 { 00:36:37.803 "method": "bdev_set_options", 00:36:37.803 "params": { 00:36:37.803 "bdev_io_pool_size": 65535, 00:36:37.803 "bdev_io_cache_size": 256, 00:36:37.803 "bdev_auto_examine": true, 00:36:37.803 "iobuf_small_cache_size": 128, 00:36:37.803 "iobuf_large_cache_size": 16 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "bdev_raid_set_options", 00:36:37.803 "params": { 00:36:37.803 "process_window_size_kb": 1024, 00:36:37.803 "process_max_bandwidth_mb_sec": 0 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "bdev_iscsi_set_options", 00:36:37.803 "params": { 00:36:37.803 "timeout_sec": 30 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "bdev_nvme_set_options", 00:36:37.803 "params": { 00:36:37.803 "action_on_timeout": "none", 00:36:37.803 "timeout_us": 0, 00:36:37.803 "timeout_admin_us": 0, 00:36:37.803 "keep_alive_timeout_ms": 10000, 00:36:37.803 "arbitration_burst": 0, 00:36:37.803 "low_priority_weight": 0, 00:36:37.803 "medium_priority_weight": 0, 00:36:37.803 "high_priority_weight": 0, 00:36:37.803 "nvme_adminq_poll_period_us": 10000, 00:36:37.803 "nvme_ioq_poll_period_us": 0, 00:36:37.803 "io_queue_requests": 512, 00:36:37.803 
"delay_cmd_submit": true, 00:36:37.803 "transport_retry_count": 4, 00:36:37.803 "bdev_retry_count": 3, 00:36:37.803 "transport_ack_timeout": 0, 00:36:37.803 "ctrlr_loss_timeout_sec": 0, 00:36:37.803 "reconnect_delay_sec": 0, 00:36:37.803 "fast_io_fail_timeout_sec": 0, 00:36:37.803 "disable_auto_failback": false, 00:36:37.803 "generate_uuids": false, 00:36:37.803 "transport_tos": 0, 00:36:37.803 "nvme_error_stat": false, 00:36:37.803 "rdma_srq_size": 0, 00:36:37.803 "io_path_stat": false, 00:36:37.803 "allow_accel_sequence": false, 00:36:37.803 "rdma_max_cq_size": 0, 00:36:37.803 "rdma_cm_event_timeout_ms": 0, 00:36:37.803 "dhchap_digests": [ 00:36:37.803 "sha256", 00:36:37.803 "sha384", 00:36:37.803 "sha512" 00:36:37.803 ], 00:36:37.803 "dhchap_dhgroups": [ 00:36:37.803 "null", 00:36:37.803 "ffdhe2048", 00:36:37.803 "ffdhe3072", 00:36:37.803 "ffdhe4096", 00:36:37.803 "ffdhe6144", 00:36:37.803 "ffdhe8192" 00:36:37.803 ] 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "bdev_nvme_attach_controller", 00:36:37.803 "params": { 00:36:37.803 "name": "nvme0", 00:36:37.803 "trtype": "TCP", 00:36:37.803 "adrfam": "IPv4", 00:36:37.803 "traddr": "127.0.0.1", 00:36:37.803 "trsvcid": "4420", 00:36:37.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.803 "prchk_reftag": false, 00:36:37.803 "prchk_guard": false, 00:36:37.803 "ctrlr_loss_timeout_sec": 0, 00:36:37.803 "reconnect_delay_sec": 0, 00:36:37.803 "fast_io_fail_timeout_sec": 0, 00:36:37.803 "psk": "key0", 00:36:37.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.803 "hdgst": false, 00:36:37.803 "ddgst": false, 00:36:37.803 "multipath": "multipath" 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "bdev_nvme_set_hotplug", 00:36:37.803 "params": { 00:36:37.803 "period_us": 100000, 00:36:37.803 "enable": false 00:36:37.803 } 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "method": "bdev_wait_for_examine" 00:36:37.803 } 00:36:37.803 ] 00:36:37.803 }, 00:36:37.803 { 00:36:37.803 "subsystem": "nbd", 00:36:37.803 "config": [] 00:36:37.803 } 00:36:37.803 ] 00:36:37.803 }' 00:36:37.803 11:54:38 keyring_file -- keyring/file.sh@115 -- # killprocess 1535566 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1535566 ']' 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1535566 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1535566 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:37.803 11:54:38 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1535566' 00:36:37.803 killing process with pid 1535566 00:36:37.804 11:54:38 keyring_file -- common/autotest_common.sh@971 -- # kill 1535566 00:36:37.804 Received shutdown signal, test time was about 1.000000 seconds 00:36:37.804 00:36:37.804 Latency(us) 00:36:37.804 [2024-11-15T10:54:38.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.804 [2024-11-15T10:54:38.657Z] =================================================================================================================== 00:36:37.804 [2024-11-15T10:54:38.657Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:37.804 11:54:38 
keyring_file -- common/autotest_common.sh@976 -- # wait 1535566 00:36:37.804 11:54:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=1537556 00:36:37.804 11:54:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1537556 /var/tmp/bperf.sock 00:36:37.804 11:54:38 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1537556 ']' 00:36:37.804 11:54:38 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.804 11:54:38 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:37.804 11:54:38 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:37.804 11:54:38 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:37.804 11:54:38 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:37.804 11:54:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:37.804 "subsystems": [ 00:36:37.804 { 00:36:37.804 "subsystem": "keyring", 00:36:37.804 "config": [ 00:36:37.804 { 00:36:37.804 "method": "keyring_file_add_key", 00:36:37.804 "params": { 00:36:37.804 "name": "key0", 00:36:37.804 "path": "/tmp/tmp.VpP2whq9Yw" 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "keyring_file_add_key", 00:36:37.804 "params": { 00:36:37.804 "name": "key1", 00:36:37.804 "path": "/tmp/tmp.VuTGaTQZoX" 00:36:37.804 } 00:36:37.804 } 00:36:37.804 ] 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "subsystem": "iobuf", 00:36:37.804 "config": [ 00:36:37.804 { 00:36:37.804 "method": "iobuf_set_options", 00:36:37.804 "params": { 00:36:37.804 "small_pool_count": 8192, 00:36:37.804 "large_pool_count": 1024, 00:36:37.804 "small_bufsize": 8192, 00:36:37.804 "large_bufsize": 135168, 00:36:37.804 "enable_numa": false 00:36:37.804 } 00:36:37.804 } 00:36:37.804 ] 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "subsystem": "sock", 00:36:37.804 "config": [ 00:36:37.804 { 00:36:37.804 "method": "sock_set_default_impl", 00:36:37.804 "params": { 00:36:37.804 "impl_name": "posix" 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "sock_impl_set_options", 00:36:37.804 "params": { 00:36:37.804 "impl_name": "ssl", 00:36:37.804 "recv_buf_size": 4096, 00:36:37.804 "send_buf_size": 4096, 00:36:37.804 "enable_recv_pipe": true, 00:36:37.804 "enable_quickack": false, 00:36:37.804 "enable_placement_id": 0, 00:36:37.804 "enable_zerocopy_send_server": true, 00:36:37.804 "enable_zerocopy_send_client": false, 00:36:37.804 "zerocopy_threshold": 0, 00:36:37.804 "tls_version": 0, 00:36:37.804 "enable_ktls": false 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "sock_impl_set_options", 00:36:37.804 "params": { 00:36:37.804 "impl_name": "posix", 00:36:37.804 "recv_buf_size": 2097152, 00:36:37.804 "send_buf_size": 2097152, 00:36:37.804 "enable_recv_pipe": true, 00:36:37.804 "enable_quickack": false, 00:36:37.804 "enable_placement_id": 0, 00:36:37.804 "enable_zerocopy_send_server": true, 00:36:37.804 "enable_zerocopy_send_client": false, 00:36:37.804 "zerocopy_threshold": 0, 00:36:37.804 "tls_version": 0, 00:36:37.804 "enable_ktls": false 00:36:37.804 } 00:36:37.804 } 00:36:37.804 ] 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "subsystem": "vmd", 00:36:37.804 "config": [] 00:36:37.804 }, 
00:36:37.804 { 00:36:37.804 "subsystem": "accel", 00:36:37.804 "config": [ 00:36:37.804 { 00:36:37.804 "method": "accel_set_options", 00:36:37.804 "params": { 00:36:37.804 "small_cache_size": 128, 00:36:37.804 "large_cache_size": 16, 00:36:37.804 "task_count": 2048, 00:36:37.804 "sequence_count": 2048, 00:36:37.804 "buf_count": 2048 00:36:37.804 } 00:36:37.804 } 00:36:37.804 ] 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "subsystem": "bdev", 00:36:37.804 "config": [ 00:36:37.804 { 00:36:37.804 "method": "bdev_set_options", 00:36:37.804 "params": { 00:36:37.804 "bdev_io_pool_size": 65535, 00:36:37.804 "bdev_io_cache_size": 256, 00:36:37.804 "bdev_auto_examine": true, 00:36:37.804 "iobuf_small_cache_size": 128, 00:36:37.804 "iobuf_large_cache_size": 16 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "bdev_raid_set_options", 00:36:37.804 "params": { 00:36:37.804 "process_window_size_kb": 1024, 00:36:37.804 "process_max_bandwidth_mb_sec": 0 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "bdev_iscsi_set_options", 00:36:37.804 "params": { 00:36:37.804 "timeout_sec": 30 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "bdev_nvme_set_options", 00:36:37.804 "params": { 00:36:37.804 "action_on_timeout": "none", 00:36:37.804 "timeout_us": 0, 00:36:37.804 "timeout_admin_us": 0, 00:36:37.804 "keep_alive_timeout_ms": 10000, 00:36:37.804 "arbitration_burst": 0, 00:36:37.804 "low_priority_weight": 0, 00:36:37.804 "medium_priority_weight": 0, 00:36:37.804 "high_priority_weight": 0, 00:36:37.804 "nvme_adminq_poll_period_us": 10000, 00:36:37.804 "nvme_ioq_poll_period_us": 0, 00:36:37.804 "io_queue_requests": 512, 00:36:37.804 "delay_cmd_submit": true, 00:36:37.804 "transport_retry_count": 4, 00:36:37.804 "bdev_retry_count": 3, 00:36:37.804 "transport_ack_timeout": 0, 00:36:37.804 "ctrlr_loss_timeout_sec": 0, 00:36:37.804 "reconnect_delay_sec": 0, 00:36:37.804 "fast_io_fail_timeout_sec": 0, 00:36:37.804 "disable_auto_failback": false, 00:36:37.804 "generate_uuids": false, 00:36:37.804 "transport_tos": 0, 00:36:37.804 "nvme_error_stat": false, 00:36:37.804 "rdma_srq_size": 0, 00:36:37.804 "io_path_stat": false, 00:36:37.804 "allow_accel_sequence": false, 00:36:37.804 "rdma_max_cq_size": 0, 00:36:37.804 "rdma_cm_event_timeout_ms": 0, 00:36:37.804 "dhchap_digests": [ 00:36:37.804 "sha256", 00:36:37.804 "sha384", 00:36:37.804 "sha512" 00:36:37.804 ], 00:36:37.804 "dhchap_dhgroups": [ 00:36:37.804 "null", 00:36:37.804 "ffdhe2048", 00:36:37.804 "ffdhe3072", 00:36:37.804 "ffdhe4096", 00:36:37.804 "ffdhe6144", 00:36:37.804 "ffdhe8192" 00:36:37.804 ] 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "bdev_nvme_attach_controller", 00:36:37.804 "params": { 00:36:37.804 "name": "nvme0", 00:36:37.804 "trtype": "TCP", 00:36:37.804 "adrfam": "IPv4", 00:36:37.804 "traddr": "127.0.0.1", 00:36:37.804 "trsvcid": "4420", 00:36:37.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.804 "prchk_reftag": false, 00:36:37.804 "prchk_guard": false, 00:36:37.804 "ctrlr_loss_timeout_sec": 0, 00:36:37.804 "reconnect_delay_sec": 0, 00:36:37.804 "fast_io_fail_timeout_sec": 0, 00:36:37.804 "psk": "key0", 00:36:37.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.804 "hdgst": false, 00:36:37.804 "ddgst": false, 00:36:37.804 "multipath": "multipath" 00:36:37.804 } 00:36:37.804 }, 00:36:37.804 { 00:36:37.804 "method": "bdev_nvme_set_hotplug", 00:36:37.804 "params": { 00:36:37.804 "period_us": 100000, 00:36:37.805 "enable": false 00:36:37.805 } 00:36:37.805 }, 
00:36:37.805 { 00:36:37.805 "method": "bdev_wait_for_examine" 00:36:37.805 } 00:36:37.805 ] 00:36:37.805 }, 00:36:37.805 { 00:36:37.805 "subsystem": "nbd", 00:36:37.805 "config": [] 00:36:37.805 } 00:36:37.805 ] 00:36:37.805 }' 00:36:37.805 11:54:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:37.805 [2024-11-15 11:54:38.646571] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 00:36:37.805 [2024-11-15 11:54:38.646619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537556 ] 00:36:38.064 [2024-11-15 11:54:38.701026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.064 [2024-11-15 11:54:38.738054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.064 [2024-11-15 11:54:38.897848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:38.632 11:54:39 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:38.632 11:54:39 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:38.632 11:54:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:38.632 11:54:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:38.632 11:54:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.890 11:54:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:38.890 11:54:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:38.890 11:54:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.890 11:54:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.890 11:54:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.890 11:54:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.890 11:54:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.457 11:54:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:39.457 11:54:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:39.457 11:54:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:39.457 11:54:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.457 11:54:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.457 11:54:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:39.457 11:54:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.457 11:54:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:39.457 11:54:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:39.457 11:54:40 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:39.457 11:54:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:40.024 11:54:40 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:40.024 11:54:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:40.024 11:54:40 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.VpP2whq9Yw /tmp/tmp.VuTGaTQZoX 00:36:40.024 11:54:40 keyring_file -- keyring/file.sh@20 -- # killprocess 1537556 00:36:40.024 11:54:40 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1537556 ']' 00:36:40.024 11:54:40 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1537556 00:36:40.024 11:54:40 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:40.024 11:54:40 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1537556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1537556' 00:36:40.025 killing process with pid 1537556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@971 -- # kill 1537556 00:36:40.025 Received shutdown signal, test time was about 1.000000 seconds 00:36:40.025 00:36:40.025 Latency(us) 00:36:40.025 [2024-11-15T10:54:40.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.025 [2024-11-15T10:54:40.878Z] =================================================================================================================== 00:36:40.025 [2024-11-15T10:54:40.878Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@976 -- # wait 1537556 00:36:40.025 11:54:40 keyring_file -- keyring/file.sh@21 -- # killprocess 1535556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1535556 ']' 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1535556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1535556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1535556' 00:36:40.025 killing process with pid 1535556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@971 -- # kill 1535556 00:36:40.025 11:54:40 keyring_file -- common/autotest_common.sh@976 -- # wait 1535556 00:36:40.593 00:36:40.593 real 0m14.673s 00:36:40.593 user 0m37.384s 00:36:40.593 sys 0m3.028s 00:36:40.593 11:54:41 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:40.593 11:54:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.593 ************************************ 00:36:40.593 END TEST keyring_file 00:36:40.593 ************************************ 00:36:40.593 11:54:41 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:36:40.593 11:54:41 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:40.593 11:54:41 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:40.593 11:54:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:40.593 11:54:41 -- 
common/autotest_common.sh@10 -- # set +x 00:36:40.593 ************************************ 00:36:40.593 START TEST keyring_linux 00:36:40.593 ************************************ 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:40.593 Joined session keyring: 330725471 00:36:40.593 * Looking for test storage... 00:36:40.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.593 11:54:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:40.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.593 --rc genhtml_branch_coverage=1 00:36:40.593 --rc genhtml_function_coverage=1 00:36:40.593 --rc genhtml_legend=1 00:36:40.593 --rc geninfo_all_blocks=1 00:36:40.593 --rc geninfo_unexecuted_blocks=1 00:36:40.593 00:36:40.593 ' 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:40.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.593 --rc genhtml_branch_coverage=1 00:36:40.593 --rc genhtml_function_coverage=1 00:36:40.593 --rc genhtml_legend=1 00:36:40.593 --rc geninfo_all_blocks=1 00:36:40.593 --rc geninfo_unexecuted_blocks=1 00:36:40.593 00:36:40.593 ' 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:40.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.593 --rc genhtml_branch_coverage=1 00:36:40.593 --rc genhtml_function_coverage=1 00:36:40.593 --rc genhtml_legend=1 00:36:40.593 --rc geninfo_all_blocks=1 00:36:40.593 --rc geninfo_unexecuted_blocks=1 00:36:40.593 00:36:40.593 ' 00:36:40.593 11:54:41 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:40.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.593 --rc genhtml_branch_coverage=1 00:36:40.593 --rc genhtml_function_coverage=1 00:36:40.593 --rc genhtml_legend=1 00:36:40.593 --rc geninfo_all_blocks=1 00:36:40.593 --rc geninfo_unexecuted_blocks=1 00:36:40.593 00:36:40.593 ' 00:36:40.593 11:54:41 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:40.593 11:54:41 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.593 11:54:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:40.593 11:54:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.593 11:54:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.593 11:54:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.853 11:54:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.853 11:54:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.853 11:54:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.853 11:54:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.853 11:54:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.853 11:54:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.853 11:54:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.853 11:54:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:40.853 11:54:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
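Up to this point keyring_linux has only pulled in its environment: nvmf/common.sh, a freshly generated host NQN, and the exported toolchain PATH. The trace that follows is the test proper: build two PSKs in the NVMe/TCP interchange format, load them into the kernel session keyring created by keyctl-session-wrapper, start spdk_tgt listening on 127.0.0.1 port 4420 with TLS enabled, and drive bdevperf against it with --psk :spdk-test:key0. A condensed sketch of that flow, assembled from the commands traced below (an illustration of the sequence, not a verbatim excerpt of linux.sh):

  # sketch only; sockets, arguments and key names match the trace below
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s    # file written by prep_key below; returns a serial, e.g. 245679642
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s    # e.g. 621936291
  ./build/bin/spdk_tgt &                                              # target side, RPC on /var/tmp/spdk.sock
  ./build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

Note that the initiator-side RPCs go to bdevperf's own socket (/var/tmp/bperf.sock) rather than to the target's, which is why every bperf_cmd line below passes -s /var/tmp/bperf.sock.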
00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:40.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:40.853 /tmp/:spdk-test:key0 00:36:40.853 11:54:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:40.853 
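The prep_key trace just above has produced /tmp/:spdk-test:key0 and is now repeating the same steps for key1. format_interchange_psk turns each configured hex string into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<hash>:<base64 of key bytes plus CRC-32>:, with hash indicator 00 here because the digest argument is 0. The encoding itself is done by the embedded "python -" one-liner in nvmf/common.sh; the following is a hedged reconstruction of what it appears to compute, not the literal helper:

  # hedged reconstruction of format_interchange_psk for digest 0 (no PSK digest)
  key=00112233445566778899aabbccddeeff
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + struct.pack("<I", zlib.crc32(k) & 0xffffffff)).decode())' "$key"

The result should line up with the value that appears verbatim in the keyctl add and keyctl print lines further down (NVMeTLSkey-1:00:MDAxMTIy...JEiQ:), and the key file is chmod 0600 because it holds a secret.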
11:54:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:40.853 11:54:41 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:40.853 11:54:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:40.854 11:54:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:40.854 /tmp/:spdk-test:key1 00:36:40.854 11:54:41 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:40.854 11:54:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1538169 00:36:40.854 11:54:41 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1538169 00:36:40.854 11:54:41 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1538169 ']' 00:36:40.854 11:54:41 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.854 11:54:41 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:40.854 11:54:41 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.854 11:54:41 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:40.854 11:54:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:40.854 [2024-11-15 11:54:41.628561] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:36:40.854 [2024-11-15 11:54:41.628623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538169 ] 00:36:41.113 [2024-11-15 11:54:41.724645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.113 [2024-11-15 11:54:41.773598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.372 11:54:41 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:41.372 11:54:41 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:36:41.372 11:54:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:41.372 11:54:41 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.372 11:54:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:41.372 [2024-11-15 11:54:42.003843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.372 null0 00:36:41.372 [2024-11-15 11:54:42.035895] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:41.372 [2024-11-15 11:54:42.036335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.372 11:54:42 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:41.372 245679642 00:36:41.372 11:54:42 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:41.372 621936291 00:36:41.372 11:54:42 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1538375 00:36:41.372 11:54:42 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1538375 /var/tmp/bperf.sock 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1538375 ']' 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:41.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:41.372 11:54:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:41.372 11:54:42 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:41.372 [2024-11-15 11:54:42.111865] Starting SPDK v25.01-pre git sha1 4b2d483c6 / DPDK 24.03.0 initialization... 
00:36:41.372 [2024-11-15 11:54:42.111921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538375 ] 00:36:41.372 [2024-11-15 11:54:42.177393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.372 [2024-11-15 11:54:42.217543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.630 11:54:42 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:41.630 11:54:42 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:36:41.630 11:54:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:41.631 11:54:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:41.889 11:54:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:41.889 11:54:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:41.889 11:54:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:41.889 11:54:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:42.147 [2024-11-15 11:54:42.868639] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:42.147 nvme0n1 00:36:42.147 11:54:42 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:42.147 11:54:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:42.147 11:54:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:42.147 11:54:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:42.147 11:54:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.147 11:54:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:42.405 11:54:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:42.405 11:54:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:42.405 11:54:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:42.405 11:54:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:42.405 11:54:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.405 11:54:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:42.405 11:54:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.664 11:54:43 keyring_linux -- keyring/linux.sh@25 -- # sn=245679642 00:36:42.664 11:54:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:42.664 11:54:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:42.664 11:54:43 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 245679642 == \2\4\5\6\7\9\6\4\2 ]] 00:36:42.664 11:54:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 245679642 00:36:42.664 11:54:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:42.664 11:54:43 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.923 Running I/O for 1 seconds... 00:36:43.860 12402.00 IOPS, 48.45 MiB/s 00:36:43.860 Latency(us) 00:36:43.860 [2024-11-15T10:54:44.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.860 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:43.860 nvme0n1 : 1.01 12404.34 48.45 0.00 0.00 10264.42 7179.17 16443.58 00:36:43.860 [2024-11-15T10:54:44.713Z] =================================================================================================================== 00:36:43.860 [2024-11-15T10:54:44.713Z] Total : 12404.34 48.45 0.00 0.00 10264.42 7179.17 16443.58 00:36:43.860 { 00:36:43.860 "results": [ 00:36:43.860 { 00:36:43.860 "job": "nvme0n1", 00:36:43.860 "core_mask": "0x2", 00:36:43.860 "workload": "randread", 00:36:43.860 "status": "finished", 00:36:43.860 "queue_depth": 128, 00:36:43.860 "io_size": 4096, 00:36:43.860 "runtime": 1.01013, 00:36:43.860 "iops": 12404.343995327334, 00:36:43.860 "mibps": 48.4544687317474, 00:36:43.860 "io_failed": 0, 00:36:43.860 "io_timeout": 0, 00:36:43.860 "avg_latency_us": 10264.416118406732, 00:36:43.860 "min_latency_us": 7179.170909090909, 00:36:43.860 "max_latency_us": 16443.578181818182 00:36:43.860 } 00:36:43.860 ], 00:36:43.860 "core_count": 1 00:36:43.860 } 00:36:43.860 11:54:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:43.860 11:54:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:44.119 11:54:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:44.119 11:54:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:44.119 11:54:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:44.119 11:54:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:44.119 11:54:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.119 11:54:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:44.378 11:54:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:44.378 11:54:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:44.378 11:54:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:44.378 11:54:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.378 11:54:45 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:44.378 11:54:45 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:44.378 11:54:45 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:44.378 11:54:45 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:44.378 11:54:45 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:44.378 11:54:45 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:44.379 11:54:45 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.379 11:54:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.637 [2024-11-15 11:54:45.377846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:44.637 [2024-11-15 11:54:45.378347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf480 (107): Transport endpoint is not connected 00:36:44.637 [2024-11-15 11:54:45.379343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf480 (9): Bad file descriptor 00:36:44.637 [2024-11-15 11:54:45.380344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:44.637 [2024-11-15 11:54:45.380352] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:44.637 [2024-11-15 11:54:45.380359] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:44.637 [2024-11-15 11:54:45.380367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
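The transport and controller errors above are the suite's expected negative result, not a regression: linux.sh@84 reruns the attach wrapped in NOT, this time with --psk :spdk-test:key1. That key is present in the session keyring, but it is presumably not the PSK the target was configured to accept for this host (the earlier attach with key0 succeeded), so the TLS handshake is torn down and the controller never initializes. The JSON-RPC request and response dumped next record that failure (code -5, Input/output error), and the check passes precisely because the command exits non-zero. A standalone sketch of the same assertion, using the rpc.py invocation seen throughout this trace instead of the suite's NOT/bperf_cmd helpers:

  # expected-failure check (illustrative only)
  if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
       -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
       -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
      echo "attach with the wrong PSK unexpectedly succeeded" >&2
      exit 1
  fi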
00:36:44.637 request: 00:36:44.637 { 00:36:44.637 "name": "nvme0", 00:36:44.637 "trtype": "tcp", 00:36:44.637 "traddr": "127.0.0.1", 00:36:44.637 "adrfam": "ipv4", 00:36:44.637 "trsvcid": "4420", 00:36:44.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.637 "prchk_reftag": false, 00:36:44.637 "prchk_guard": false, 00:36:44.637 "hdgst": false, 00:36:44.637 "ddgst": false, 00:36:44.637 "psk": ":spdk-test:key1", 00:36:44.637 "allow_unrecognized_csi": false, 00:36:44.637 "method": "bdev_nvme_attach_controller", 00:36:44.637 "req_id": 1 00:36:44.637 } 00:36:44.637 Got JSON-RPC error response 00:36:44.637 response: 00:36:44.637 { 00:36:44.637 "code": -5, 00:36:44.637 "message": "Input/output error" 00:36:44.637 } 00:36:44.637 11:54:45 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:44.637 11:54:45 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:44.637 11:54:45 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:44.637 11:54:45 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@33 -- # sn=245679642 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 245679642 00:36:44.638 1 links removed 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@33 -- # sn=621936291 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 621936291 00:36:44.638 1 links removed 00:36:44.638 11:54:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1538375 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1538375 ']' 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1538375 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1538375 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1538375' 00:36:44.638 killing process with pid 1538375 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@971 -- # kill 1538375 00:36:44.638 Received shutdown signal, test time was about 1.000000 seconds 00:36:44.638 00:36:44.638 
Latency(us) 00:36:44.638 [2024-11-15T10:54:45.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.638 [2024-11-15T10:54:45.491Z] =================================================================================================================== 00:36:44.638 [2024-11-15T10:54:45.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.638 11:54:45 keyring_linux -- common/autotest_common.sh@976 -- # wait 1538375 00:36:44.897 11:54:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1538169 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1538169 ']' 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1538169 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1538169 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1538169' 00:36:44.897 killing process with pid 1538169 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@971 -- # kill 1538169 00:36:44.897 11:54:45 keyring_linux -- common/autotest_common.sh@976 -- # wait 1538169 00:36:45.155 00:36:45.155 real 0m4.746s 00:36:45.155 user 0m9.075s 00:36:45.155 sys 0m1.533s 00:36:45.155 11:54:46 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:45.155 11:54:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:45.155 ************************************ 00:36:45.155 END TEST keyring_linux 00:36:45.155 ************************************ 00:36:45.415 11:54:46 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:45.415 11:54:46 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:45.415 11:54:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:45.415 11:54:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:45.415 11:54:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:45.415 11:54:46 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:45.415 11:54:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:45.415 11:54:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:45.415 11:54:46 -- common/autotest_common.sh@10 -- # set +x 00:36:45.415 11:54:46 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:45.415 11:54:46 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:36:45.415 11:54:46 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:36:45.415 11:54:46 -- common/autotest_common.sh@10 -- # set +x 00:36:50.686 INFO: APP EXITING 
00:36:50.686 INFO: killing all VMs 00:36:50.686 INFO: killing vhost app 00:36:50.686 WARN: no vhost pid file found 00:36:50.686 INFO: EXIT DONE 00:36:52.587 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:36:52.587 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:52.587 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:55.877 Cleaning 00:36:55.877 Removing: /var/run/dpdk/spdk0/config 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:55.877 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:55.877 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:55.877 Removing: /var/run/dpdk/spdk1/config 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:55.877 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:55.877 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:55.877 Removing: /var/run/dpdk/spdk2/config 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:55.877 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:55.878 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:55.878 Removing: 
/var/run/dpdk/spdk3/config 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:55.878 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:55.878 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:55.878 Removing: /var/run/dpdk/spdk4/config 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:55.878 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:55.878 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:55.878 Removing: /dev/shm/bdev_svc_trace.1 00:36:55.878 Removing: /dev/shm/nvmf_trace.0 00:36:55.878 Removing: /dev/shm/spdk_tgt_trace.pid1018384 00:36:55.878 Removing: /var/run/dpdk/spdk0 00:36:55.878 Removing: /var/run/dpdk/spdk1 00:36:55.878 Removing: /var/run/dpdk/spdk2 00:36:55.878 Removing: /var/run/dpdk/spdk3 00:36:55.878 Removing: /var/run/dpdk/spdk4 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1015953 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1017174 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1018384 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1019083 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1020129 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1020181 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1021282 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1021457 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1021681 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1023649 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1025086 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1025490 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1025777 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1026084 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1026397 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1026685 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1026965 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1027277 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1028071 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1031755 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1032051 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1032129 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1032347 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1032652 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1032916 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1033362 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1033475 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1033768 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1033784 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1034072 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1034333 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1034715 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1034990 00:36:55.878 Removing: 
/var/run/dpdk/spdk_pid1035324 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1039336 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1043809 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1055567 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1056209 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1060693 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1061059 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1065614 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1072023 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1075057 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1086026 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1095556 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1097454 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1098510 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1117215 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1121414 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1170600 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1176081 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1182323 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1189397 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1189402 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1190271 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1191229 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1192238 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1192836 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1193066 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1193325 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1193352 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1193514 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1194392 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1195426 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1196242 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1196997 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1197007 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1197278 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1198689 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1200077 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1209225 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1246395 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1251405 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1253233 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1255135 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1255339 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1255605 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1255810 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1256448 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1258377 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1259406 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1259963 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1262350 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1262904 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1263722 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1267770 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1273612 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1273614 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1273615 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1277626 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1286810 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1291130 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1297405 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1298850 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1300344 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1301832 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1306645 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1311052 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1315202 00:36:55.878 Removing: /var/run/dpdk/spdk_pid1322876 00:36:56.137 Removing: 
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1322922
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1327840
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1328045
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1328244
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1328755
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1328767
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1333581
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1334226
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1339399
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1342289
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1348158
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1353669
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1363632
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1370922
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1370924
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1391726
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1392387
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1393051
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1393669
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1394445
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1395217
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1395766
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1396470
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1400605
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1400973
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1407414
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1407535
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1413206
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1417730
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1428404
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1429042
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1433456
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1434083
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1438667
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1444793
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1447891
00:36:56.137 Removing: /var/run/dpdk/spdk_pid1458269
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1467499
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1469261
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1470124
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1488040
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1491915
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1494953
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1503053
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1503138
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1508299
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1510330
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1512539
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1513738
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1515975
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1517189
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1526383
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1526905
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1527427
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1530007
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1530914
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1531449
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1535556
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1535566
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1537556
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1538169
00:36:56.138 Removing: /var/run/dpdk/spdk_pid1538375
00:36:56.138 Clean
00:36:56.396 11:54:57 -- common/autotest_common.sh@1451 -- # return 0
00:36:56.396 11:54:57 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:36:56.396 11:54:57 -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:56.396 11:54:57 -- common/autotest_common.sh@10 -- # set +x
00:36:56.396 11:54:57 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:36:56.396 11:54:57 -- common/autotest_common.sh@730 -- # xtrace_disable
00:36:56.396 11:54:57 -- common/autotest_common.sh@10 -- # set +x
00:36:56.396 11:54:57 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:56.396 11:54:57 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:56.396 11:54:57 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:56.396 11:54:57 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:36:56.396 11:54:57 -- spdk/autotest.sh@394 -- # hostname
00:36:56.396 11:54:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:56.655 geninfo: WARNING: invalid characters removed from testname!
00:37:28.732 11:55:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:31.273 11:55:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:34.561 11:55:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:37.850 11:55:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:40.384 11:55:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:43.673 11:55:44 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:46.959 11:55:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:46.959 11:55:47 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:46.959 11:55:47 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:37:46.959 11:55:47 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:46.959 11:55:47 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:46.959 11:55:47 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:46.959 + [[ -n 931128 ]]
00:37:46.959 + sudo kill 931128
00:37:46.968 [Pipeline] }
00:37:46.984 [Pipeline] // stage
00:37:46.990 [Pipeline] }
00:37:47.005 [Pipeline] // timeout
00:37:47.011 [Pipeline] }
00:37:47.026 [Pipeline] // catchError
00:37:47.033 [Pipeline] }
00:37:47.049 [Pipeline] // wrap
00:37:47.057 [Pipeline] }
00:37:47.070 [Pipeline] // catchError
00:37:47.080 [Pipeline] stage
00:37:47.082 [Pipeline] { (Epilogue)
00:37:47.095 [Pipeline] catchError
00:37:47.096 [Pipeline] {
00:37:47.110 [Pipeline] echo
00:37:47.112 Cleanup processes
00:37:47.118 [Pipeline] sh
00:37:47.402 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:47.402 1549345 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:47.417 [Pipeline] sh
00:37:47.701 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:47.701 ++ grep -v 'sudo pgrep'
00:37:47.701 ++ awk '{print $1}'
00:37:47.701 + sudo kill -9
00:37:47.701 + true
00:37:47.714 [Pipeline] sh
00:37:47.997 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:06.103 [Pipeline] sh
00:38:06.386 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:06.387 Artifacts sizes are good
00:38:06.402 [Pipeline] archiveArtifacts
00:38:06.410 Archiving artifacts
00:38:06.627 [Pipeline] sh
00:38:06.989 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:07.004 [Pipeline] cleanWs
00:38:07.015 [WS-CLEANUP] Deleting project workspace...
00:38:07.015 [WS-CLEANUP] Deferred wipeout is used...
00:38:07.022 [WS-CLEANUP] done
00:38:07.026 [Pipeline] }
00:38:07.045 [Pipeline] // catchError
00:38:07.058 [Pipeline] sh
00:38:07.338 + logger -p user.info -t JENKINS-CI
00:38:07.347 [Pipeline] }
00:38:07.361 [Pipeline] // stage
00:38:07.367 [Pipeline] }
00:38:07.381 [Pipeline] // node
00:38:07.386 [Pipeline] End of Pipeline
00:38:07.428 Finished: SUCCESS